Monday, January 10, 2011

Assessment of Quality and Personal Suitability in Peer-Reviewed Journals

This post describes the process I went through to find and assess leading academic journals relevant to my work. There are many peer-reviewed journals, but not all are created equal - some are more topically suitable but less esteemed - and finding a balance is crucial. The process described here is specific to my field, but it could be adapted by scholars in any discipline seeking journals suited to their work.

Determining a suitable venue for academic publication is challenging for new scholars. One must consider both a journal's suitability of content and approach and its potential influence. Although there are metrics to help gauge the perceived esteem of a journal, judging its overall value can be difficult. For example, journals rated highly by traditional metrics may be more inclined towards conservative methods and topics, which can ill suit new scholars pursuing innovative approaches and emerging topics. In addition, highly rated journals often have lower acceptance rates, decreasing the likelihood of publication, and the delays of resubmitting to alternative journals may result in work losing its timeliness or its claim to being the first published on a given topic. This post explores the process I developed to identify suitable journals for publishing in my chosen research area (i.e. the use of mobile social media in libraries or information repositories). I will also briefly address journal bibliometrics.

To build an initial list of journals, I drew on personal sources: my favoured reading, journals I have recently cited, and those encountered during a recent literature review. Using this list, I determined the relevant categories of Australian Research Council (ARC) ranked journals, which were "Information Systems" (category #0806) and "Library and Information Studies" (#0807). Downloading the full ARC ranked outlets list enabled me to sort by category. I was quickly able to cull unsuitable journals based on their titles alone; others required examining the specific journal to determine its topical suitability. This resulted in 62 possible journals to consider. I then examined the editorial aims and scope of these 62 journals by reviewing their issues from the past two to three years, which allowed me to filter out journals with an inappropriate scope (e.g. engineering) and reduce the list to 31 viable journals. A rough sketch of this culling step appears below.
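As an illustration only, a minimal sketch of this category-and-title cull might look like the following. The column names and the exclusion keywords are my own assumptions for the sake of the example, not part of the actual ARC spreadsheet format:

```python
import csv

# ARC field-of-research categories identified as relevant to my research area
RELEVANT_CATEGORIES = {"0806", "0807"}  # Information Systems; Library and Information Studies

# Title keywords used for a quick first cull (illustrative only)
EXCLUDE_KEYWORDS = {"engineering", "geoscience", "bioinformatics"}

def cull_journals(path):
    """Return journals in the relevant ARC categories whose titles pass a keyword cull."""
    keep = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed column names; the real ARC ranked outlets list may differ
            if row["FoR"] not in RELEVANT_CATEGORIES:
                continue
            title = row["Title"].lower()
            if any(word in title for word in EXCLUDE_KEYWORDS):
                continue
            keep.append(row["Title"])
    return keep

if __name__ == "__main__":
    for title in cull_journals("arc_ranked_outlets.csv"):
        print(title)
```

In practice the title-based cull only removes the obvious misfits; the remaining journals still need a manual review of their aims and scope, as described above.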

To determine the perceived esteem and influence of a journal, I compiled several established bibliometrics, specifically the ARC's assigned score, the five-year impact factor, the SCImago Journal Rank (SJR), and the Source Normalized Impact per Paper (SNIP). Each has its own strengths and weaknesses.

The ARC score is based on the subjective assessment of Australian academics, and no specific formula appears to be public. Critics of the ARC rankings note that they overly favour coverage of Australian topics and allocate top grades according to a fixed, subjective percentage (i.e. the top five percent of journals in a given field), which penalizes journals in specialized or emerging fields. On the other hand, ARC rankings allow a more holistic assessment of a journal, acknowledging that its importance rests on more than just the citations it generates.

The impact factor measurements (both one- and five-year) rely solely on citations. Although this can produce a more objective metric, it has limitations. Impact factors do not account for differing citation behaviour and volume across fields, nor do they consider the quality of the citing source or filter out author or journal self-citations. Furthermore, the Institute for Scientific Information (ISI) index from which impact factors are drawn excludes open-access journals yet includes citation sources that are not original research (e.g. review articles). The one-year timeframe of the standard impact factor, I feel, does not adequately suit the citation behaviour of the information field, so I chose to use the five-year impact factor.

The SJR indicator and the SNIP score, both provided by Scopus, are in my view more transparent and suitable metrics. To begin with, Scopus has a much larger database than ISI and includes open-access journals. SJR considers the prestige of the citing source in its formula (similar to Google's PageRank) and uses a larger citation window (three years) than the impact factor. SNIP assesses the citation patterns of a given field to produce a field-normalized score; for example, medical fields tend to cite work more quickly and frequently than humanities research.

Having compiled these four metrics for each journal, I weighted them equally in an attempt to smooth out their individual biases. My assessment assigned up to three points per metric for each journal's ranking, for a maximum possible score of 12. One point was assigned to journals in roughly the lowest third of rankings for a metric, two to the middle third, and three to the top third. A key limitation of this approach is that a journal absent from the ISI or Scopus indexes, or otherwise lacking an associated bibliometric, receives the lowest possible score regardless of the reason for its absence. Nonetheless, this system is useful for producing a list of top-ranking, topically suitable journals. A sketch of the scoring scheme follows below.
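As a rough illustration, a minimal sketch of this composite scoring might look like the following. The sample journals and metric values are invented placeholders purely to show the mechanics, not the actual figures I compiled:

```python
def tertile_points(value, all_values):
    """Score a metric 1-3 by which third of the ranked values it falls in; missing values get 1."""
    if value is None:
        return 1  # absent from the index: lowest possible score
    ranked = sorted(v for v in all_values if v is not None)
    position = ranked.index(value) / len(ranked)  # 0.0 (lowest) to just under 1.0 (highest)
    if position < 1 / 3:
        return 1
    if position < 2 / 3:
        return 2
    return 3

def composite_scores(journals):
    """journals maps a title to its four metric values (ARC tier, 5-yr IF, SJR, SNIP)."""
    metrics = ["arc", "if5", "sjr", "snip"]
    columns = {m: [j[m] for j in journals.values()] for m in metrics}
    return {
        title: sum(tertile_points(j[m], columns[m]) for m in metrics)
        for title, j in journals.items()
    }

# Invented placeholder values (ARC tiers mapped to numbers for comparability)
sample = {
    "Journal A": {"arc": 3, "if5": 1.8, "sjr": 0.9, "snip": 1.4},
    "Journal B": {"arc": 2, "if5": 0.7, "sjr": 0.4, "snip": 0.8},
    "Journal C": {"arc": 1, "if5": None, "sjr": None, "snip": None},  # not indexed
}
print(composite_scores(sample))  # composite scores out of 12; e.g. Journal C gets the minimum of 4
```

The equal weighting is a deliberate simplification; a different weighting could favour whichever metric one trusts most, but that would reintroduce the very biases the composite score is meant to smooth out.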

To determine the suitability of editorial content, I assigned a five-point score, with five indicating the most relevance to my research area and one the least. This score was based on a review of each journal's issues from the past three years, looking for articles that explored topics similar to my interests and took an epistemological viewpoint similar to mine.

To wed perceived esteem with relevance to my personal areas of interest, I compared the two lists. I wanted journals with a high overall ranking (i.e. my composite score), so I set a threshold of ten or higher (twelve journals attained this level). I also wanted journals highly relevant to my research area, so I set a relevance threshold of four or higher (ten journals met this level). Any journal meeting both criteria was deemed high-ranking and personally relevant. The journals that qualified (with their associated composite and relevance scores) are: Internet Research (11, 5), Interacting with Computers (11, 4), Journal of Academic Librarianship (11, 4), and Information Technology and Libraries (10, 4). A small sketch of this final selection step follows.
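A minimal sketch of this intersection step, using the composite and relevance scores quoted above; the two non-qualifying entries are invented placeholders included only to show the filter working:

```python
# (composite score out of 12, relevance score out of 5) per journal
scores = {
    "Internet Research": (11, 5),
    "Interacting with Computers": (11, 4),
    "Journal of Academic Librarianship": (11, 4),
    "Information Technology and Libraries": (10, 4),
    # Invented placeholders illustrating journals that fail one threshold
    "Hypothetical Journal A": (12, 2),   # esteemed but not relevant
    "Hypothetical Journal B": (6, 5),    # relevant but lower ranked
}

COMPOSITE_THRESHOLD = 10
RELEVANCE_THRESHOLD = 4

shortlist = [
    title
    for title, (composite, relevance) in scores.items()
    if composite >= COMPOSITE_THRESHOLD and relevance >= RELEVANCE_THRESHOLD
]
print(shortlist)  # prints the four qualifying journals listed above
```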

From this list of four journals, I attempted to find each journal's acceptance rate, information that would help gauge the likelihood of being published there. I was unable to find it for any of the above journals (in general, it seems to be something of an insider secret).

This overall process has been useful for identifying new publication sources and has given me a richer understanding of the perceived esteem of various journals. I believe it has also produced a list of the most suitable journals for me to consider when I am ready to publish my research.
