In autumn 2021, I applied for a Marie Skłodowska-Curie Fellowship, and I received the results in February 2022. I was not among the winners. So what? Like me, hundreds of other colleagues were not. What really matters is that, this time, unlike my 2019 application, I received highly questionable feedback on my research proposal. The reviewers’ comments seemed barely to refer to my methods and rationale, revealing an inattentive reading of the proposal. For example, although politicians were not included in any way in the scope of my research plan, one comment referred to me “interviewing politicians”. What I found particularly shocking, in ethical terms, was a comment asserting that healthcare professionals from Middle Eastern diaspora groups were not fit to participate in policy-making because they would be biased. The reviewer added: “This is an exceptional and risky way of relating with the research participants”.
My host institution encouraged me to request a re-evaluation of the proposal. I did, in vain: the EU committee rejected my request for re-evaluation while stating that it agreed with most of the points raised by the reviewers (Ha! Without making a formal re-evaluation possible, you feel entitled to reject the re-evaluation option?! A clear contradiction. And what background and expertise did they have, as a committee, to agree with the reviewers’ comments? A paradoxical and shallow response, to say the least). What really shocked me was their agreement with the bias-related comment, which still haunts me. To quote their response verbatim: “After a close reading of the ESR and the relevant parts of the proposal, the Committee cannot agree with the applicant that the statements indicated in the ESR are inappropriate. The Committee confirms that the comments refer to the experts’ assessment in relation to the related evaluation aspects and reflect the corresponding evaluators’ opinion on the proposal”. Don’t worry if you find this answer tautological or barely intelligible: it is just a laboured paraphrase echoing the reviewer’s racist, under-explained, and discriminatory comment. At that point, not only had I failed my MSCF application – which I would never consider ‘unjust’ per se – but my attempt to get serious feedback on my proposal had failed too.
Much to my dismay, straight after the results came out, I realised that many colleagues were also dissatisfied with the feedback they had received: some even reported offensive comments about their chosen host institution. Others questioned the lack of anonymity in the selection process, stating that they found the reviewers’ feedback largely biased and sometimes the product of the reviewers’ internet searches (e.g., checking an applicant’s political views or the like). Together with colleagues based at different institutions, I intended to publish a public petition asking for a revision of the fellowship assessment criteria and greater rigour in the assessment process. Funders such as the Wellcome Trust, for example, also interview shortlisted candidates and require peer monitoring of the criteria used for assessment. That method entails working towards a single report and evaluation, rather than having a member of the committee summarise the comments of two reviewers who evaluate the proposal independently. Although what counts as the best assessment criteria remains a moot point, one of our arguments was that scientific rigour can hardly be guaranteed when the process is not strictly coordinated. Nor does the current political economy of research help, with academics short on time and under pressure from multiple competing tasks.
Unfortunately, too many of my colleagues eventually decided not to sign and publish the petition, probably because they did not want their names publicly associated with it – as though this could somehow tarnish their reputations – or, more likely still, because they did not want to be associated with failure. I found this episode very telling of how academics often purport to serve as public intellectuals while parading only intellectual success. All the rest, such as poor-quality assessment processes and the unjust politics of assessment, needs to be swept under the rug.
Let’s overturn that rug. As a researcher, I also care about learning from failure and from precious feedback, while refusing the success-or-failure binary logic that undergirds the contemporary horror of neoliberal research. As Pink Floyd used to sing, “Is there anybody out there?”.