What is science as sport? In sport, achievement is measured by winning; science as sport means measuring success in research by markers of esteem: How many citations do I have? What is my h-index? How prestigious are the journals in which I publish? How many grants have I obtained? How soon was I promoted?
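As an aside, the h-index mentioned above has a precise definition: the largest h such that the author has at least h papers with at least h citations each. A minimal sketch in Python (the function name and the sample citation counts are illustrative, not from any real bibliometric database):

```python
def h_index(citations):
    """Largest h such that at least h of the given papers
    have at least h citations each."""
    h = 0
    # Sort citation counts in descending order; the i-th paper
    # (1-indexed) contributes to h as long as it has >= i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: five papers with these citation counts have h-index 4,
# since four papers have at least 4 citations but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```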

Why should this be a problem? I will talk about mathematics, because this is the field I know best, and I can only guess how much of the following applies to other fields. First, there are essential activities that are not captured by markers of esteem. The most important is the reviewing of papers. Ideally, a paper submitted to a journal is reviewed by one or two other mathematicians, who read the paper in detail and check that the proofs are correct. Reading a mathematical paper is hard work and takes time; each hour spent reviewing a paper is an hour not spent writing your own papers. In the prevailing publish-or-perish atmosphere we therefore spend less time polishing and proofreading our own papers, and less time reviewing papers written by others; often enough, reviewers do not actually take the time to read a paper. In consequence I claim, with nothing beyond anecdotal evidence, that the overall quality of research papers is declining.

Second, the hunt for esteem leads to the search for that magical creature, *the least publishable unit*. In the beginning, scientists are motivated by the pursuit of knowledge, by the desire to answer questions whose answers are not known. What happens when the questions turn out to be difficult? This is when mathematics becomes interesting, this is where research becomes exciting. But it also means that I am spending time “unproductively”, because I am not writing a paper. Half a year spent working on a problem is half a year not spent writing papers. And so it can be tempting to chip off a small subproblem that I can solve and write a paper about. And then perhaps chip off another subproblem. And if after some chipping the main problem is still too big, there are always other chippable problems to be found.

Third, measuring mathematics in terms of esteem means that when discussing other mathematicians we stop asking the questions: What is he or she researching? What result has he or she proven? Instead we ask the other kind of question: How many papers have they published in the Annals of Mathematics or Inventiones Mathematicae? How many NSF or EPSRC grants do they hold? This is because the latter kind of question is easier to answer. It does not require us to think about actual mathematics, to judge whether a given subdiscipline is important, or to understand what the point of a theorem is. It even gives us the illusion that we can compare someone working on analysis of PDEs with someone doing algebraic topology without having to know much about either area.

Having said this, how robust is the scientific process if we treat science as a sport instead of pursuing it to increase our knowledge? It is a difficult question, because we are all pushed in this direction to some extent. In practice, academic hiring and promotion are tied to markers of esteem: citations, publications and grants. And so the more appropriate question is: How much should we swim against the tide? How much time should we spend doing what is important for the community, for students and for mathematics, but will not be measured in numbers? This encompasses many things: writing research monographs, developing high-quality teaching materials, reading other research papers in detail. I don’t have an answer to this question, but there are hints, such as studies in psychology that cannot be reproduced, or the debates about foundational work in symplectic geometry, that point to cracks in the facade of science.