If you are at a university that has graduate students, you have probably heard about whether your university is an R1 or R2 or R-whatever research institution. Universities tout their position in this ranking system, awarded by the Carnegie Foundation, to denote how “prestigious” they are in terms of research. From 1994, the ranking was awarded according to how much federal research funding a university received.
Because of this, all the ranking told you was how much federal money a particular university received. This system is incredibly flawed. For example, faculty who are more dedicated to writing grants, and less dedicated to teaching, mentoring graduate students, publishing articles, or the other activities that are supposed to be the mainstay of academia, will certainly bring in more money. That money, however, comes at the expense of exactly those activities: teaching, mentoring, and publishing.
As a result, many universities concentrated on recruiting faculty in fields that could score large grants. Teaching became a sideline: teaching duties were typically lifted, as a reward, from the large grant earners and foisted onto poorly paid visiting or adjunct faculty, who might be excellent teachers but were (and are) nonetheless treated as second-class citizens in academia. Teaching and mentoring students became an inconvenience to grant administrators rather than the primary reason professors exist.
It should be obvious that some fields of research are far better funded than others.
Firstly, there is an immediate and major bias in favor of laboratory or university-based research, because these studies carry a high overhead tithe required by the university (75% of the direct cost of the research, or more), which effectively doubles the cost of the research and thus the amount of money booked if your grant application is funded. Field studies, such as ecological projects, often have low or even no overheads (10-20% is a common rate). This means that the “importance” of lab-based research over field research is doubled, or more, when it comes to the research rankings.
Administrators like high overheads, as administrative offices often get a percentage of them. (Overheads are technically for the running costs of laboratories: heating, lighting, and so forth, but often they are not actually spent on these costs.) This cut can be 50% or more of the overhead, and it effectively gives administrators a massive “slush fund” over which there is often little oversight or accountability.
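To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The rates and dollar figures are illustrative, drawn only from the ranges quoted above, not from any real grant:

```python
# Back-of-the-envelope sketch: how overhead rates inflate the "research
# dollars" a university books. All rates and amounts are illustrative.

def total_award(direct_costs: float, overhead_rate: float) -> float:
    """Total grant money booked: direct costs plus the university's overhead."""
    return direct_costs * (1 + overhead_rate)

science = 100_000  # the same $100,000 of actual research, costed two ways

lab_total = total_award(science, 0.75)    # lab study at a 75% overhead rate
field_total = total_award(science, 0.15)  # field study at a 15% overhead rate

print(f"Lab study books:   ${lab_total:,.0f}")    # $175,000
print(f"Field study books: ${field_total:,.0f}")  # $115,000

# If the administration takes a 50% cut of the overhead as its "slush fund":
admin_cut = 0.50 * 0.75 * science
print(f"Admin slice of the lab grant: ${admin_cut:,.0f}")  # $37,500
```

On identical science, the lab grant looks roughly half again as large to a funding-based ranking; push the lab rate to 100% and the field rate to zero, and the same work counts exactly double.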
Secondly, some research is intrinsically inexpensive. For example, if you pursue anthropology in developing countries, you could feasibly do years of work immersed in communities for literally a few thousand dollars. This decreases the amount of money you bring in (unless you artificially inflate the cost of your research, which administrators often encourage). Fields with expensive equipment and operational costs, by contrast, are served by agencies that provide correspondingly larger grants. A research group running a hadron collider will be favored over a group doing grassroots conservation work in developing countries.
As a non-governmental marine mammal scientist, I can tell you that grants for marine mammal research are few and far between and competition is fierce: in 2014 the National Science Foundation funded only nine marine mammal research projects across the whole of the US, whereas last year there were 871 grants for astronomy. The ratio of scientists in the US to NSF grants is over 100:1 for marine mammalogists, versus approximately 8:1 for astronomers. Not all fields are funded equally.
Thirdly, federal agencies allocate funds according to political priorities. The current US administration is, for example, a climate change denier, and funding for research on climate and environmental regulation will probably disappear. If you are in a politically unpopular field of study, you will be less valued by the administrators; researchers who specialize in topics that are not currently “in vogue” politically will be penalized. The two main areas of growth for research funding, which are unlikely to be defunded by the federal government, are military research and biomedical science, especially funding for diseases such as cancer (see “In the future all scientific research will be funded by Taco Bell …”). Many, many universities have shifted their focus to concentrate on these fields. This has been especially prevalent in biology departments, where the number of ecologists, animal behaviorists, and conservation biologists has declined (through, e.g., departures, retirements, and tenure denials) and their positions have been filled by more profitable biomedical or bioinformatics researchers.
As mentioned above, R1 status was based on federal grants alone. What about funding from non-federal sources? Well, grants from foundations, NGOs, crowdfunding, or other non-federal sources are largely discounted, as the ranking relies mostly on government data.
The Carnegie Foundation, to its credit, recognized some of the flaws in the system and the divisiveness it was causing, and in 2000 changed its categories to denote which universities had graduate programs and which were research intensive. This should have eliminated the research funding Thunderdome that had developed, but many universities kept calling themselves “R1” universities (even though the category technically no longer existed) and continued to feature the amount of federal funding they received as their indicator of research status in their marketing materials.
In 2015, Carnegie reinstated the R1 (and R2 and R3) categories. This time it did separate science & engineering from non-science & engineering fields in terms of funding, recognizing the big disparity in grant-giving between them, but it did not take into account the disparities in funding within these broad categories. Conservation, for example, is much more poorly funded than cancer research.
The changed system also took into account the number of graduate students a university hosted, in addition to federal funding. This at least redresses the balance somewhat, so some of the institutions that cut back on their commitment to mentoring graduate students and higher-level teaching in order to concentrate on grant-winning are now feeling some pain.
Still, taking on graduate students and teaching continues to be a hindrance to an academic’s career. Teaching higher-level classes and mentoring students takes time, a lot of time. In the end the students will produce publications, but the supervisor will not be the first author, so taking on a graduate student takes away time academics could be using to write grants. I have a lab full of graduate students: I spend roughly half my working day answering questions, editing papers, editing thesis proposals, helping with analyses, and generally mentoring. With teaching and writing up manuscripts for publication, there is little time for anything else, and what little I have I often devote to service, such as working on department committees and organizing events. I did all of this during a period when universities were winding down graduate programs because they distracted from grant writing. The revised Carnegie ranking at least recognized that producing graduate students is an important part of research.
However, the categorization only takes into account the number of students graduating. This immediately invites universities to game the system: by churning out many low-quality graduate students, they gain an advantage in the rankings. I have personally seen PhD dissertations from certain high-ranking institutions that are little more than literature reviews, to which I wouldn’t award a master’s degree, let alone a PhD. It could set up a system in which faculty are pressured to pass graduate students, and to take on more students than they can comfortably mentor, just to increase graduation rates. Moreover, the ranking system does not evaluate graduate student quality by the gold standard of academic quality control: publications in bona fide peer-reviewed journals. Regardless, graduation numbers are still overshadowed by federal funding as a determinant of rank.
Not only does the ranking system fail to evaluate graduate students on peer-reviewed publications, it nowhere takes into account publications by research faculty. Publications used to be the way to measure research output and quality for academics, and they still are in most countries, but US administrators now focus on grants, and publications are an afterthought. Carnegie (or at least Indiana University, which now runs the ranking system) is, however, going to introduce a bibliometric component to the ranking in the near future. Different fields do have different publication outputs (e.g., many short papers in the biomedical sciences versus books in some social sciences and humanities), but this, at least, is an area where there has been much academic debate and many attempts at producing a fair means of comparing research productivity via publications.
So, in short, as it currently stands, “R1” status says virtually nothing about the number of ground-breaking studies conducted by a university; it says nothing about how these studies are received by academics in the field; it says nothing about the impact of these studies on humanity and the environment. It says little about how good the university is at producing new, innovative, ground-breaking young researchers. “R1” status is hugely biased, and it ultimately favors institutions that get funds for hot-topic, well-funded, politically popular research fields, such as human health and the military. What it certainly does not do is measure excellence in research.
It is a divisive ranking system, and because administrators try to game it, it is crippling real, innovative academic research and turning universities from institutions of teaching, learning, and discovery into institutions of miserly money-grubbing.