Service is one of the pillars of academic life. In an academic career, “service” is an umbrella term for the various contributions faculty members make to their institutions, their academic disciplines, and the broader community.
Service can take many forms. Within academic institutions, service duties often involve active participation in departmental or school committees, faculty governance, and mentoring roles. Faculty members may advise students, develop curricula, and engage in program assessment and accreditation processes. At the institutional level, service may involve membership on university-wide committees and administrative roles such as department chair or program director.
Beyond institutional obligations, service also extends to involvement in professional societies, community engagement, advocacy for critical issues, and temporary appointments as program officers at funding agencies, helping to enable funding for new areas of research in one’s field of specialty. This multifaceted engagement strengthens not only one’s immediate academic community but also the field at large. It is also a great networking opportunity for otherwise cloistered research scientists, regardless of seniority or position.
One of the less desirable service activities for academics is peer review: reviewing research papers and grant proposals, serving as an editor for academic journals, and contributing to conference organization. In my personal experience, peer review is one of the most polarizing and brutish traditions of academic life, despite its necessity.
The concept of peer review doesn’t have a singular origin or a specific inventor. Instead, it developed gradually over centuries in response to the evolving needs of the academic and scientific community. Rewinding to the 17th century, we can see the early beginnings of peer review in the establishment of scientific journals such as the “Philosophical Transactions of the Royal Society,” which emerged in 1665. These early journals aimed to disseminate scientific knowledge and subject it to critical examination by peers, a rudimentary form of peer review. Fast-forwarding to the 20th century, we witness the maturation of the modern peer review system, with hundreds of journals across hundreds of specialty areas, often published by professional societies or university presses to help disseminate information. The idea of having independent experts rigorously assess each other’s research papers before publication became integral to maintaining the quality and trustworthiness of scholarly work.
Toward the end of the 20th century and the beginning of the 21st, several controversies surrounding peer review gradually emerged, without any specific starting point. Peer review devolved from a means of sharing knowledge within a small scholarly community into a brutish process riddled with bias, abuse, nepotism, and gatekeeping. Journal impact factors emerged as peer review became more established and institutionalized. Needless to say, publishing a paper became a significant expense, because the publishing of scientific research somehow moved from professional societies and university presses, which treated publishing as a non-profit service activity, to a commercial for-profit business run by private publishing houses.
Further, and not for the better, the journal impact factor became widely adopted as a tool for evaluating the prestige and importance of academic journals across disciplines. Researchers, institutions, and publishers have used it to assess the significance of journals and to decide where to publish or seek research articles. While it has been a valuable tool for assessing journal influence, it has also drawn criticism and debate regarding its limitations and potential misuse in research evaluation.
The concept of the journal impact factor was introduced by Eugene Garfield, a pioneer in the field of bibliometrics and information science. Garfield founded the Institute for Scientific Information (ISI) in 1960, and in 1963, he developed the Science Citation Index (SCI), which was one of the first citation databases. The idea behind the journal impact factor was to provide a quantitative measure of the influence or impact of scientific journals by analyzing the frequency with which their articles were cited in other scholarly publications.
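To make the metric concrete: the most commonly reported form is the two-year impact factor. For a journal in year $y$, it is the average number of citations received in year $y$ by the items the journal published in the two preceding years:

$$
\mathrm{JIF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
$$

where $C_y(t)$ denotes the citations received in year $y$ by articles the journal published in year $t$, and $N_t$ is the number of citable items the journal published in year $t$. Note that the metric says something about the journal as a whole, not about any individual article in it.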
Now, in the digital age, we have even more evaluation metrics to assess research impact, however superficial they may be. For instance, the h-index was proposed by Jorge E. Hirsch, a physicist at the University of California, San Diego, in a paper titled “An index to quantify an individual’s scientific research output,” published in the Proceedings of the National Academy of Sciences (PNAS) in 2005. A researcher has an h-index of h if h of their papers have each been cited at least h times; the index thus captures both the productivity and the impact of a researcher’s work, and it is widely used for assessing and comparing the scholarly contributions of individuals across disciplines. There is also the i10-index, introduced by Google Scholar, which simply counts the number of an author’s publications that have received at least ten citations each.
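Both indices are straightforward to compute from a list of per-paper citation counts. Here is a minimal sketch in Python; the citation counts below are made up purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations: list[int]) -> int:
    """Google Scholar's i10: number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts for one author's papers
papers = [48, 33, 12, 10, 9, 4, 2, 0]
print(h_index(papers))    # 5 (five papers have at least 5 citations each)
print(i10_index(papers))  # 4 (four papers have at least 10 citations)
```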
Online preprint publishing services like arXiv have their origins in the early 1990s. arXiv, one of the most well-known preprint servers, was established in August 1991 by physicist Paul Ginsparg. It was originally known as the “xxx.lanl.gov” server and hosted preprints in the field of high-energy physics. The name was later changed to arXiv.org, and it expanded to include other disciplines, such as mathematics, computer science, and biology. The concept of preprint servers alleviated some of the ugly problems of the peer review and scientific publication process: it allowed for the rapid dissemination of research, increased collaboration, and a more open and transparent approach to sharing scientific knowledge. Since the establishment of arXiv, many other preprint servers have been created for various academic disciplines, further advancing the practice of sharing preprints online.
Around the same time that preprint servers started to gain popularity, particularly in STEM fields, online journals began to emerge as well. Online scientific publishing has transformed the way research is disseminated, making it more accessible and efficient. However, alongside reputable online scientific publishers, it also instigated the rise of what were later termed “predatory journals.” While reputable publishers maintained high editorial and peer-review standards, ensuring quality and ethical practices, predatory journals exploited open access by prioritizing profit over quality, characterized by a lack of genuine peer review, solicitation of authors by specialty through spam emails, and promises of swift publication for a fee in exchange for little to no editorial services.
Calls for reform in the publication and peer review process, particularly in areas like bias and transparency, have intensified in recent years. These controversies continue to evolve as the academic and research landscape changes, reflecting the dynamic nature of scholarly evaluation.
I have tried to do as much peer review as I could make time for, even when holding non-academic appointments throughout my career. I have been quite selective about the drafts I review, prioritizing the first first-author publications of young researchers, and about the journals from which I accept such assignments. This is partly because I quite enjoy conveying genuinely constructive, helpful criticism and a word of encouragement to up-and-coming scientists in my field. One never forgets the first time one receives a peer review that viciously shreds years and years of work created through blood, sweat, and tears. That is to say, some people confuse rigor and concision with rudeness; they hide behind anonymity and don’t feel the need to exercise common courtesy and respect toward people seeking peer review of their work. A peer reviewer’s job, in my opinion, is to provide a rigorous technical assessment of the submitted work: to make sure the submission is clearly written, that its descriptions of methods and protocols are thorough enough for a qualified person to repeat the presented results, and that it offers a convincing and technically sound interpretation of those results without much speculation. Given this selectivity, my review load has never exceeded two or three publications per year.
In line with my current academic appointment, I will try to spend more time on peer review going forward. I will prioritize requests from journals published by professional societies in my field, as well as reputable open-access online journals, and I am going to reserve sufficient time for more than ten but fewer than twenty peer-review assignments per academic year.
As I was working through my plans to increase my commitment to peer review, I received an invitation from the open-access online journal Frontiers in Genetics to join their editorial board as an Associate Editor for their Statistical Genetics and Methodology section. This all goes to say that I accepted their invitation, and as of November 2023, I am serving as an associate editor for this journal.
I hope I can help more of my peers with their publications in this position going forward. Stay tuned for a review of the experience.
-Elhan