From the point of view of scholarship, these are challenging times in academe. The reward structure is suboptimal, focusing increasingly on measurements rather than on what is being measured, and emphasizing the indirect-cost flow from external funding over the intrinsic value of the research being funded. Fifty years ago, there was widespread agreement that research funding was an input to a research program. These days it is often perceived as an end in itself. Regrettably, some university administrators take a transactional approach to research funding: if it carries a full indirect rate and isn't illegal, it's good research.
Further, while performing their roles in teaching, research and service, modern academics now have to spend a measurable amount of their time avoiding the minefields produced by the weaponization of woke wars, cancel cultures, political correctness; Title IX, FERPA, and affirmative action regulations; and state memory and gag laws. Make no mistake about it. The effects of these distractions on the quality of scholarship and teaching are measurable.
Since the Enlightenment, universities have been perceived as a marketplace for ideas, even ideas that might be unpopular or contentious. But all too frequently, especially within some regions of the U.S., this perception is giving way to the notion that a public university should be a center for ideological inculcation and/or advanced job training. The focus of education is shifting to marketable skills, and the need for critical thinking has given way to mimicry of the attitudes of the power elite. Of course, this is not a new phenomenon. Before the Enlightenment, education was largely left in the hands of the dominant religious orders or local governments, and enrollment was carefully constrained by race, ethnicity, and gender. While the constraints have shifted somewhat, modern gestures toward un-enlightenment are having a similar effect.
In our lifetimes, perhaps the best-known case of political interference with the administration of a major publicly-supported university involved the firing of University of California system president Clark Kerr in 1967. Kerr was perceived as too liberal by the prevailing state leadership when he espoused the view that it was a cardinal mistake to energize universities to promote ‘safe ideas' to students. He argued that it was far more important for universities to provide environments where students felt safe to consider new ideas. [1] Dominant politicians of the period found such expressions unacceptable. The most recent similar example of a political purge of a senior university executive involves the takeover of Florida's New College in 2023, [2][3] which prompted nearly half the faculty to resign. [4] This latest purge was far more blatant than the California affair nearly six decades earlier. In the Florida case, some trustees clearly targeted “diversity, equity and inclusion initiatives” [5] in an effort to achieve a state-decreed illiberalism.
The University of California and Florida New College cases are not isolated examples of the manipulation of the modern university by external partisan interests. Neither is this manipulation restricted to university administrators: individual faculty are also being attacked. [1] What is more, the external threat vectors go beyond partisan state politicians to include benefactors and donors, federal funding agencies, religious organizations, tribalists of sundry stripes, and even foreign governments and agencies, all attempting to achieve a reformation of higher education based on non-pedagogically-inspired agendas and biases, especially those grounded in social conservatism. [6][7][8][9] But the critical point that is too often overlooked is that this new era of academic un-enlightenment extends well beyond social conservatism, attacks on diversity, equity and inclusion initiatives, and assaults on hot-button issues like critical race theory, LGBTQ rights, gender equality, and even progressive education itself. The modern era of un-enlightenment is motivated by a political demand for models of education built on affirmative ideology-shaping rather than curiosity-inspired knowledge acquisition, a demand reminiscent of Orwellian and Huxleyan dystopian novels.
In any event, current attempts to move away from the notion that tax-supported, liberal education is a public good, and toward the view that tax-supported education should be limited to the preservation of specific, ideologically-oriented traditions to the exclusion of others, are well documented. However, in addition to this external erosion of scholastic integrity and purity facing the academy, there are also signs of internal erosion of a very different nature. We take these up in turn.
In addition to the erosion of academic integrity and standards from outside the academy, there are also internal forces that, while not as dramatic, contribute to the current academic chaos. One of the most recent examples was the moral breakdown revealed by the Thousand Talents Program prosecutions. [6] However, the ethical deficiencies shared by the academics, researchers, and scholars who were convicted were not espionage-related. Rather, the convictions demonstrated that these individuals (a) were not transparent with their employers, (b) violated institutional conflict-of-interest policies, (c) violated trust through deception, and (d) provided flawed or insufficient reporting and accounting to supervisory authorities. From a legal perspective, the convictions revealed mostly pedestrian illegalities: wire fraud, tax fraud, making false statements, destruction of evidence, visa fraud, commodities fraud, and smuggling were among the most frequent violations. As the perpetrators were largely well-educated technologists and academics associated with American universities, these results suggest that the over-arching problem uncovered by the Thousand Talents Program prosecutions may well be fundamental weaknesses and deficiencies in institutional hiring practices. It should not come as a surprise to anyone that the dictum that capricious moral compasses make for poor employees applies to higher education as much as to other vocations.
A second source of integrity erosion within the academy is research fraud and research misconduct. For example, within the past year, Nature, Science, and The Wall Street Journal have all reported on an investigation of research misconduct by a faculty member in physics who claimed to have developed the first room-temperature superconductor. [10][11] Although some of the principal publications resulting from this apparently flawed research have been withdrawn [12] or retracted, [13] the public relations damage to the reputations of the host institution and discipline, not to mention the faculty member, was already done. Although examples of research fraud and misconduct are exceedingly rare in the U.S., when they occur they are a media bonanza for external interests that seek to undermine the reputation of modern universities and their faculty. To those unfamiliar with the ways of the academy, such rare ethical breaches can be misinterpreted as undermining the value of the entire pursuit of diversified, well-rounded education.
A third source is the increasing use of so-called paper mills that market bogus scholarship. In a recent article in Science, Jeffrey Brainard reports that “journals are awash in a rising tide of scientific manuscripts from paper mills - secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship.” [14] One example is associated with a Russian website (www.123mi.ru) that apparently enjoys a thriving business (in Russia, at least). Anna Abalkina reported that 434 published papers are potentially linked to this Russian paper mill. [15] While online paper mills have served university student communities for decades, offering such a service to faculty is a baroque new variation on this theme.
Journal hijacking or cloning is a related phenomenon. In this case, phony journals pose as legitimate ones. A cloned website with a similar look-and-feel to its veridical counterpart offers up fabricated, plagiarized, or ghostwritten contributions, or purloined articles that violate copyright, involve text recycling, or are perhaps a product of generative AI. This falls under the rubric of what Abalkina calls ‘author manufacturing.' [16] It should come as no surprise that in the online marketplace these services overlap with other fraudulent services and likely share common criminal ancestry. To paper mills and journal hijacking we must also add the use of vanity presses and predatory publishing, vehicles that are specifically used to exaggerate scholarly productivity.
Closely related to research fraud and misconduct is a fourth facet of the erosion of academic integrity: plagiarism. Coincidentally, Science reported that plagiarism was also connected with the apparently bogus superconductor research published in Nature. [11] Plagiarism is one element in the tidal wave of challenges facing modern publishers as they struggle to maintain profitability in an era where reading is no longer regarded as fundamental and content aggregation and paywall circumvention are the norm. Beyond that, they have to deal with legal liabilities when ill-behaved authors and enthusiastic attorneys attract libel, slander, and copyright infringement litigation (cf. https://copyrightalliance.org/ ). Without question, plagiarism is the topic most threatening to a scholar's professional reputation. [17] Within recent months several cases have received extensive media coverage, including those of an M.I.T. professor, [18] the president of a prestigious Ivy League university, [19] and a dean of an engineering college. [20] These are very visible assaults on the prestige of the academy in the eyes of the public.
A fifth source is the proliferation of questionable multi-authorship practices. Kuperman and Sokol [21] list five incentives for multi-authorship: (1) the pressure to publish tends to reward quantity, (2) research may span several specialization areas, (3) a natural desire to collaborate, (4) the need to recognize co-PIs, post-doctoral fellows, and research associates connected with grants, and (5) a perceived increase in the likelihood of acceptance. To these incentives we should also add (6) the dilution of each contributing author's level-of-effort, (7) the practical value of adding co-authors as courtesy recognitions, and (8) the impact of co-authorship horse-trading, or the quid pro quo effect. In fact, when investigating the relationship between scholarly productivity and the quid pro quo effect, Kuperman and Sokol observed a negative correlation: being a frequent co-author was somewhat inversely correlated with being a frequent first or corresponding author.
In fact, their survey confirmed that “for a significant fraction of the respondents (47%) the quid pro quo effect associated with co-authorship is [considered] important.” [21] This result is not unexpected, due to at least two factors. First, administrative bean counting creates a moral hazard by incentivizing the use of multi-authorship to inflate productivity. Second, publishers and professional societies customarily do not require authors to disclose their level of contribution to a publication, either by percentage of contribution or by specific role in the enterprise. In extreme cases, authors may not even have read or edited the report they allegedly co-authored, a practice known as “gate crashing.”
Consider, for example, the article in the 15 May 2015 issue of Physical Review Letters that reported the measurement of the Higgs boson mass. [22] This article listed 5,154 co-authors, which yields an author/page ratio of 573:1 and an approximate author/word ratio of 1:1. There is simply no way to reasonably apportion credit or recognition in such cases. This is not to disparage the underlying research, but rather the apparent absurdity of the manner of presentation chosen to represent the work.
While there are advantages to multi-authorship, there is no question that there is a tendency to proliferate co-authorship beyond necessity and legitimacy. This is a consequence of the historical evolution of multi-authorship from the centuries-old tradition of rewarding scholarly publication based on sole authorship: the traditional reward structure for scholarly productivity doesn't scale well for groups. What was missing in this evolution was some additional measure of accountability, e.g., a common disclosure of individual contribution agreed to by all co-authors. Even a fractional estimate would be a welcome improvement. Failing that, there will be an inevitable tendency to proliferate multi-authorship. Ironically, one variation on this theme is to add co-authors without their knowledge or permission. [23] A perception of multi-authorship chicanery can only contribute to public contempt for what might otherwise be a worthwhile scholarly effort.
Finally, we add a variation on the theme of multi-authorship abuse to be found in the unwarranted emphasis on so-called “impact measures” of research quality.
The creation of measures for evaluating the impact of published research has become a cottage industry in academe. While much of what we say will apply to such measures in general, we will focus here on only the most popular example at this writing: the h-index. The h-index is defined as the largest number N such that N of an author's publications have each been cited at least N times. It is widely used by popular online bibliometric websites and indexing services such as Google Scholar (scholar.google.com) and Semantic Scholar (semanticscholar.org). An initial caveat is in order as a preamble to the discussion to follow: it is not impact measures in themselves that are problematic. Rather, it is the unpropitious uses to which they are put.
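To make the definition concrete, the computation can be sketched in a few lines; the citation counts below are hypothetical:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # rank papers by citation count
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top h papers all have at least h citations
        else:
            break  # counts are descending, so no later paper can qualify
    return h

# Hypothetical record: five papers cited 10, 8, 5, 4, and 3 times.
# Four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that adding a sixth paper with three citations would leave the value unchanged, which already hints at how the metric rewards sustained citation counts rather than raw output.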
We begin with some history. Eugene Garfield launched the field of bibliometrics in the 1950s, at a time when a premium was placed on meta-level analysis of scholarship. [24] This was a paradigmatic example of curiosity-driven research. Garfield sought to address a looming question: could there be a simple, quantitative way to evaluate scholarship? Not an unreasonable question. However, the fact that a question is reasonable doesn't entail that any particular answer is reasonable. Moreover, there remains the prior question of whether it makes sense to apply a quantitative measure to a qualitative assessment in the first place. In the case of a scholarly publication, a quantitative metric can be said to approximate a qualitative evaluation only if it can somehow mirror the judgment of an ideal evaluator: an unbiased, fair individual who is perfectly informed of the literature. Measuring convergent validity (e.g., by a high Pearson correlation coefficient) works well enough to assess mutual relationships (covariances) between numbers, but not necessarily between subjective values. It must always be remembered that correlations may be coincidental, and coincidence entails neither causality nor any deep understanding of the underlying phenomena. For example, a correlation between per capita U.S. yogurt consumption and the price of a publicly traded stock is likely spurious (see https://www.tylervigen.com/spurious-correlations ).
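The point about spurious correlation is easy to demonstrate with a toy computation. The two series below are invented for illustration: both trend upward for unrelated reasons, yet the Pearson coefficient comes out very close to 1.

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical five-year series: both rise steadily, for unrelated reasons.
yogurt_per_capita = [7.5, 7.9, 8.4, 8.8, 9.1]  # invented consumption figures
share_price = [21.0, 24.0, 26.0, 30.0, 33.0]   # invented stock prices
print(round(pearson_r(yogurt_per_capita, share_price), 3))  # high, near 1.0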
In particular, the h-index [25] is a bibliometric that combines the number of an author's publications and the number of citations of those publications in a single metric. That's all that it does. While claims have been made that the h-index correlates well with Nobel prizes, professional recognitions, and sundry expert opinions, these claims should be taken cum grano salis.
Therein lies the rub with the use of the h-index. While it is a clever quantitative metric in Garfield's sense, far too much has been made of it in the assessment of scholarly impact by those ill-equipped to understand the limits of the metric. As J. E. Hirsch, the creator of the h-index, made clear, the h-index is only an approximation (of what isn't agreed upon), admits of exceptions, may not apply equally across disciplines, and favors popular subfields. [25] We might also add that such indices can favor incrementalism over imagination and insight. In a sense, the h-index is a standard – and not an unreasonable one – for volumetric measures of publication popularity. In my experience, these shortcomings are only paid lip service in practice, and almost completely downplayed when metrics are used in academic evaluations and assessments. As a result, the h-index is easily misused as a weapon against those who don't embrace the local, prevailing spin on the publish-or-perish mantra. I find it amazing that, excluding the community of information scientists and bibliometricians, the champions of citation indices are almost universally unfamiliar with the primary research. Thus, misuse seems inevitable. Of course, this is not the fault of the measurement: one normally doesn't fault a soil survey for inefficient farming practices.
In addition, the h-index has a serious deficiency that is related to our earlier observations: it is insensitive to distinctions between the levels of effort contributed by multiple authors. Hirsch recognized this in a subsequent paper:
Perhaps the most important shortcoming of the h-index is that it does not take into account in any way the number of coauthors of each paper. Thus, an author that publishes alone does not get any extra credit compared to one that routinely publishes with a large number of coauthors, even though the time and effort invested per paper by the single author or by each of the coauthors in a small collaboration is presumably larger than the corresponding one for a member of a larger collaboration. This can lead to serious distortions in comparing individuals with very different coauthorship patterns, and gives an incentive to authors to work in large groups when it is not necessarily desirable… it is sometimes a grey area whether or not a minor contributor should be included as author of a paper; with the h-index and other current bibiometric [sic] indicators there is no penalty to add authors to a paper and as a consequence there can be an incentive to do so, due to implicit or explicit quid pro quo expectations. [26]
For that reason, he proposed another metric, ħ (“hbar”), to overcome the deficiency. [26] Briefly, the improved index normalizes credit against each co-author's “core” h-index to limit any tendency to inflate h-indices by proliferating co-authorship. The ħ metric is more sophisticated than fractional credit measures (such as the inverse of the number of co-authors) in that increasing credit is given to co-authors in proportion to the degree to which a publication's citation impact significantly contributes to the author's core h-index. While ħ is an improvement over the original h-index in terms of providing a disincentive to ‘game' the metric via some of the questionable multi-authorship practices identified earlier, it also is not without limitation. For one, like its antecedent, ħ remains a quantitative approximation of a qualitative assessment of scholarly impact. No tweaking of metrics can overcome that shortcoming. For another, it remains agnostic to potential bibliometric manipulation through the unpredictable influence of predatory publishing, various types of gray and open access literature, the insertion of coercive and patronage citations, and the potential corrupting influence of generative AI. But most critically, ħ simply isn't widely used. Why would that be?
First, the ħ-index is computationally more expensive than the h-index. [26] Second, it doesn't serve the interests of junior faculty as well. Since it has become fashionable to use the h-index to justify academic advancement, one would expect junior faculty and their champions to discourage the use of ħ. Third, there is no commercial incentive to discourage the gaming of citation metrics. The online purveyors of impact assessments (e.g., search engines) enjoy the commercial advantage of offering any curiosity-inspiring feature they can, irrespective of whether those features produce anything of social value. Such features increase the perceived utility of a site, which in turn increases the use of the site, which in turn increases online revenues. Put simply, providing h-index scores has become a popular, marketable feature of online services that are not invested in preventing their misuse. Since the scores may be calculated without much overhead and offer a seemingly plausible alternative to costly evaluation by those in charge of assessing scholarly work, this is perceived as a win-win: no revenue downside for the provider, and the user is spared the effort of reading and serious contemplation, a seemingly objective way to support decisions with minimal cognitive investment. What's not to like?
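The computational expense is no mystery: Hirsch's coauthorship-adjusted index is self-consistent, so each author's value depends on the values of every coauthor across the network. The following is a simplified, illustrative fixed-point sketch of that idea, not a faithful implementation of the algorithm in [26]; the authors, papers, and citation counts are all hypothetical:

```python
def h_index(citations):
    """Classic h-index: largest h with h papers cited at least h times."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def hbar_indices(papers_by_author):
    """Approximate every author's coauthorship-adjusted index at once.

    papers_by_author maps an author to a list of (citations, listed) pairs,
    where listed is the set of all listed authors (including the author).
    A paper counts toward an author's value only if its citation count meets
    the current value of everyone listed on it; iterate until stable.
    """
    vals = {a: h_index([c for c, _ in ps])  # start from the plain h-index
            for a, ps in papers_by_author.items()}
    for _ in range(100):  # cap the iterations defensively
        new = {}
        for author, papers in papers_by_author.items():
            kept = sorted((c for c, listed in papers
                           if all(c >= vals[b] for b in listed)), reverse=True)
            new[author] = sum(1 for rank, c in enumerate(kept, start=1)
                              if c >= rank)
        if new == vals:  # fixed point reached
            break
        vals = new
    return vals

# Hypothetical network: A publishes mostly solo; B's record leans on a
# weakly cited paper written jointly with A.
papers = {
    "A": [(10, {"A"}), (9, {"A"}), (8, {"A"}), (2, {"A", "B"})],
    "B": [(2, {"A", "B"}), (2, {"B"})],
}
print(hbar_indices(papers))  # {'A': 3, 'B': 1}
```

In this toy network, B's plain h-index is 2, but the joint paper's two citations fall below A's value of 3, so it drops out of B's core and B's adjusted value falls to 1. The global iteration over the coauthor network is what makes this family of metrics costlier than the single-pass h-index.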
In short, the primary objection to the use of impact measures has to do with the unintended consequence of discouraging a more traditional, thoughtful, and thorough evaluation of scholarly achievement. To paraphrase Melvin Kranzberg's first law of technology: bibliographic metrics like citation indices are neither good nor bad, nor are they neutral. [27]
External threat vectors to higher education are easiest to recognize when the sources of influence are subjected to transparency. Such was the case when the takeover of the New College of Florida by the controlling political party was reported in the media. In this case, the threat to academic independence was obvious to everyone who cared to be informed, and when media coverage is extensive, eventual public accountability is likely. That said, outside influencing is relatively easy to conceal, even in the case of public institutions. This falls under the rubric of “quiet influencing.” Among the best-known players in the quiet-influencing space are foundations associated with the Koch brothers. [27][28] Over the past several decades, investigative journalists have reported on many examples of charitable donations to higher education with strings attached, including, but not limited to, the requirement that the external agencies have the right of refusal in faculty selection, [29] course approval, and book selection, [30] sometimes without the approval of the existing faculty. [31][32]
While the effects of external threat vectors can be blatant and the reactions immediate, the effects of internal threat vectors are reputational, nuanced, and delayed. Incidents of plagiarism, research fraud, questionable multi-authorship practices, bogus publishing and the like are exceedingly rare, but they receive an inordinate amount of public attention at great reputational cost to the individuals and organizations involved. It would be a mistake of the first order, however, to dismiss them. Over time they are doing a great deal of damage to the reputation of the academic enterprise. In fact, these incidents may even be weaponized as in the recent case of the resignation of the president of a prestigious Ivy League university over what appears to be relatively minor indiscretions involving plagiarism. [33][34][35]
The over-emphasis on impact measures is the outlier among internal threats. While there remains an ongoing debate over the deficiencies of particular metrics such as the h-index, [36][37][38][39] as we've argued, the real problem lies not with the measures as bibliometric tools, but with the fact that their use frequently involves an abrogation of the responsibility to personally invest in the maintenance of academic standards and integrity. [40] In the academy and the professions, the search for quick, labor-saving alternatives to the exercise of sound judgment should be resisted at all costs. The mere fact that something is easy to do doesn't make doing it a good idea.
[1] H. Berghel, A Collapsing Academy, Part II: How Cancel Culture Works on the Academy, Computer, 54:10, 2021. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9548027 )
[2] P. Okker, I Was president of Florida's New College. Then I Was Fired., The Chronicle of Higher Education, July 19, 2023. (available online: https://www.chronicle.com/article/i-was-president-of-floridas-new-college-then-i-was-fired )
[3] M. Goldberg, DeSantis Allies Plot the Hostile Takeover of a Liberal College, The New York Times, Jan. 9, 2023. (available online: https://www.nytimes.com/2023/01/09/opinion/chris-rufo-florida-ron-desantis.html )
[4] C. Suarez, D. Royal and N. Ellis, Students, professors report chaos as semester begins at New College of Florida, CNN US, August 27, 2023. (available online: https://www.cnn.com/2023/08/26/us/new-college-of-florida-chaos-reaj/index.html )
[5] S. Clay and N. Ellis, New College of Florida trustees vote to abolish DEI programs, even as students protest against conservative overhaul of school, CNN US, March 1, 2023. (available online: https://www.cnn.com/2023/02/28/us/new-college-florida-board-meeting-reaj/index.html )
[6] H. Berghel, "The Thousand Talents Program Prosecutions in Context," Computer, 56:11, pp. 95-102, Nov. 2023. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10286240 )
[7] D. Payne, Trump ally Hillsdale College pitches 1619 Project counterweight, Politico, 07/21/2021. (available online: https://www.politico.com/news/2021/07/21/trump-ally-1619-project-500464 )
[8] American History and Civics Lessons Including the Hillsdale 1776 Curriculum, Hillsdale College. (available online: https://k12.hillsdale.edu/Curriculum/Hillsdale-K12-American-History/ )
[9] The 1776 Report, The President's Advisory 1776 Commission, January, 2021. (available online: https://trumpwhitehouse.archives.gov/wp-content/uploads/2021/01/The-Presidents-Advisory-1776-Commission-Final-Report.pdf )
[10] D. Garisto, Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results, Nature, 06 April, 2024. (available online: https://www.nature.com/articles/d41586-024-00976-y )
[11] D. Garisto, Plagiarism allegations pursue physicist behind stunning superconductivity claims, Science, 13 April 2023. (available online: https://www.science.org/content/article/plagiarism-allegations-pursue-physicist-behind-stunning-superconductivity-claims )
[12] E. Snider, et al, RETRACTED ARTICLE: Room-temperature superconductivity in a carbonaceous sulfur hydride, Nature, 586, pp. 373-377 (2020). (available online: https://www.nature.com/articles/s41586-020-2801-z#citeas )
[13] D. Castelvecchi, Nature retracts controversial superconductivity paper by embattled physicist, Nature, 07 November 2023, (available online: https://www.nature.com/articles/d41586-023-03398-4 )
[14] J. Brainard, Fake scientific papers are alarmingly common, Science, 9 May 2023. (available online: https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common )
[15] A. Abalkina, Detecting a network of hijacked journals by its archive, Scientometrics 126, 7123–7148 (2021). (available online: https://link.springer.com/article/10.1007/s11192-021-04056-0 )
[16] A. Abalkina and A. Libman, The real costs of plagiarism: Russian governors, plagiarized PhD theses, and infrastructure in Russian regions. Scientometrics 125, 2793–2820 (2020). (available online: https://link.springer.com/article/10.1007/s11192-020-03716-x )
[17] B. McMurtrie, A Brief Guide to How Colleges Adjudicate Plagiarism Cases, The Chronicle of Higher Education, January 3, 2024. (available online: https://www.chronicle.com/article/a-brief-guide-to-how-colleges-adjudicate-plagiarism-cases )
[18] K. Long, J. Newsham and N. Parakul, Academic celebrity Neri Oxman plagiarized from Wikipedia, scholars, a textbook, and other sources without any attribution, Business Insider, Jan 5, 2024. (available online: https://www.businessinsider.com/neri-oxman-plagiarize-wikipedia-mit-dissertation-2024-1?op=1 )
[19] A. Lawrence, Harvard's Claudine Gay was ousted for ‘plagiarism'. How serious was it really?, The Guardian, 6 Jan 2024. (available online: https://www.theguardian.com/education/2024/jan/06/harvard-claudine-gay-plagiarism )
[20] B. Borrell, Exclusive: Embattled dean accused of plagiarism in NSF report, Retraction Watch, February 28, 2024. (available online: https://retractionwatch.com/2024/02/28/exclusive-embattled-dean-accused-of-plagiarism-in-nsf-report/ )
[21] V. Kuperman and G. Sokol, On the causes and ramifications of multi-authorship in science, Scientometrics (2024). doi: 10.1007/s11192-024-04963-y. (available online: https://link.springer.com/article/10.1007/s11192-024-04963-y )
[22] G. Aad et al., “Combined measurement of the Higgs Boson mass in pp collisions at √s = 7 and 8 TeV with the ATLAS and CMS experiments,” Phys. Rev. Lett., vol. 114, no. 19, 191803, pp. 1-33, 2015. (available online: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.114.191803 )
[23] L. Yournshajeklan, The dean who came to visit – and added dozens of authors without their knowledge, Retraction Watch, April 10, 2024. (available online: https://retractionwatch.com/2024/04/10/the-dean-who-came-to-visit-and-added-dozens-of-authors-without-their-knowledge/#more-129106 ).
[24] E. Garfield, Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas, Science, Vol 122, Issue 3159, 15 Jul 1955, pp. 108-111. (available online: https://www.science.org/doi/abs/10.1126/science.122.3159.108 )
[25] J. Hirsch, An index to quantify an individual's scientific research output, Proc Natl Acad Sci U S A., 2005 Nov 15; 102(46): 16569–16572. (available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/ )
[26] J. Hirsch, An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship, Scientometrics, 85, pp. 741-754, 2010. (available online: https://link.springer.com/article/10.1007/s11192-010-0193-9 )
[27] M. Kranzberg, Technology and History: “Kranzberg's Laws”, Technology and Culture, 27:3, Jul. 1986, pp. 544-560. (available online: https://www.jstor.org/stable/3105385 )
[27] K. Miller and R. Bellamy, Fine Print, Restrictive Grants, and Academic Freedom, Academe, May-June 2012. (available online: https://www.aaup.org/article/fine-print-restrictive-grants-and-academic-freedom )
[28] J. Mayer, Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right, Anchor, New York, 2017.
[29] D. Levinthal, Inside the Koch brothers' campus crusade, Public Integrity, March 27, 2014. (available online: https://publicintegrity.org/politics/inside-the-koch-brothers-campus-crusade)
[30] G. Jones, Universities, the Major Battleground in the Fight for Reason and Capitalism, Academe, AAUP, July-August, 2010. ( https://www.aaup.org/article/universities-major-battleground-fight-reason-and-capitalism )
[31] C. Flaherty, The UT Austin Liberty Institute? What's That?, Inside Higher Ed, September 24, 2021. ( https://www.insidehighered.com/news/2021/09/24/ut-austins-liberty-institute-whats-professors-ask )
[32] R. Gold, Who Is Bud Brigham, the Man Behind UT's “Liberty Institute”?, TexasMonthly, September 2, 2021. (available online: https://www.texasmonthly.com/news-politics/bud-brigham-liberty-institute-university-texas/ )
[33] A. Lawrence, Harvard's Claudine Gay was ousted for ‘plagiarism'. How serious was it really?, The Guardian, 6 Jan 2024. (available online: https://www.theguardian.com/education/2024/jan/06/harvard-claudine-gay-plagiarism )
[34] I. Ward, We Sat Down With the Conservative Mastermind Behind Claudine Gay's Ouster, Politico, 01/03/24. (available online: https://www.politico.com/news/magazine/2024/01/03/christopher-rufo-claudine-gay-harvard-resignation-00133618 )
[35] C. Purtill, How plagiarism-detection programs became an unlikely political weapon, Los Angeles Times, Jan. 21, 2024. (available online: https://www.latimes.com/science/story/2024-01-21/how-plagiarism-detection-software-for-academics-became-an-unlikely-political-weapon )
[36] S. Lehmann, A. Jackson, and B. Lautrup, Measures and Mismeasures of Scientific Quality, Physics and Society, 24 Jan 2006. (available online: https://arxiv.org/pdf/physics/0512238.pdf )
[37] M. Schreiber, A case study of the modified Hirsch index h m accounting for multiple coauthors. Journal of the American Society for Information Science and Technology 60 , 1274–1282, 2009. (available online: https://onlinelibrary.wiley.com/doi/10.1002/asi.21057 )
[38] L. Bornmann, H-D. Daniel, What do we know about the h index?, Journal of the American Society for Information Science and Technology, v. 58, I. 9, 2007. (available online: https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/asi.20609 )
[39] P. Batista, M. Campiteli, O. Kinouchi, and A. Martinez, Is it possible to compare researchers with different scientific interests? Scientometrics 68, 179–189, 2006. (available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=9ce4be90f01edc2e6347550d921d3e2eb5553956 )
[40] H. Berghel, "A Collapsing Academy, Part III: Scientometrics and Metric Mania," Computer , vol. 55, no. 3, pp. 117-123, March 2022, doi: 10.1109/MC.2022.3142542. (available online: https://ieeexplore.ieee.org/document/9734274 )