Several parts of the report invite comments, but I will focus here on one particular aspect: the notion of ‘impactful’ mathematics. The report wants to overcome the traditional division of mathematics into ‘pure’ and ‘applied’, and so it creates a new category—impactful mathematics.

What is impactful mathematics? The report mentions several well-known examples intended to show that pure mathematics can be impactful. Graph theory is used to analyse social networks, harmonic analysis underlies much modern signal processing and number theory is the basis of modern encryption methods.

The problem with the label ‘impactful’ is that it can only be applied in retrospect. Sometimes decades pass between a mathematical discovery and its impact. Elliptic curves, for example, which are used in the signature algorithms underlying Bitcoin, make their first appearance in the work of Diophantus, and the fact that they form an abelian group was known by the time of Poincaré. The use of elliptic curves in cryptography was first proposed in 1985, independently by Neal Koblitz and Victor S. Miller, but only in the 2000s did their use become widespread. The situation is similar for graph theory and signal processing. Impact often takes time.

The report states that

We are often able to predict that a mathematical breakthrough will be important – but not always. G.H. Hardy, for example, famously boasted in his ‘A Mathematician’s Apology’ of the uselessness of his great love, number theory. Seventy years later, number theory lies at the heart of internet and e-commerce security, fundamental to the functioning of the world economy and of worldwide communications.

Two comments jump to mind. First, we may be able to predict the usefulness of a breakthrough once it has happened, but the research-grant-oriented landscape we all live in requires us to predict the usefulness of *future* breakthroughs. There our track record is much worse. Breakthroughs often happen serendipitously, without much planning or anticipation, and they certainly don’t come with a pre-written ‘Pathways to Impact’ statement as required by EPSRC.

Second, Hardy’s views on the usefulness of mathematics are often misrepresented. Hardy did not so much boast of the uselessness of number theory as take solace in it. Hardy was well aware that some mathematics is useful or impactful. (All following quotes are from ‘A Mathematician’s Apology’.)

Now some mathematics is certainly useful in this way; the engineers could not do their job without a fair working knowledge of mathematics, and mathematics is beginning to find applications even in physiology. —Hardy §19

But he then made the conscious decision that this was not the mathematics he himself was interested in. For Hardy the pursuit of mathematics is an aesthetic pursuit: mathematics is to be judged by its beauty and depth. Interestingly, Hardy also anticipated the notion of impactful mathematics and the fact that it differs from both pure and applied mathematics.

There is another misconception against which we must guard. It is quite natural to suppose that there is a great difference in utility between ‘pure’ and ‘applied’ mathematics. This is a delusion: there is a sharp distinction between the two kinds of mathematics, […], but it hardly affects their utility. —Hardy §22

While the Bond report gives examples of pure mathematics that has found impact, Hardy, on the other hand, gives examples of applied mathematics that—in his time at least—had no usefulness.

I count Maxwell and Einstein, Eddington and Dirac, among ‘real’ mathematicians. The great modern achievements of applied mathematics have been in relativity and quantum mechanics, and these subjects are, at present at any rate, almost as ‘useless’ as the theory of numbers. —Hardy §25

Hardy is also aware that his views might well be swept away by the tides of time.

It is the dull and elementary parts of applied mathematics, as it is the dull and elementary parts of pure mathematics, that work for good or ill. Time may change all this. No one foresaw the applications of matrices and groups and other purely mathematical theories to modern physics, and it may be that some of the ‘highbrow’ applied mathematics will become ‘useful’ in as unexpected a way; but the evidence so far points to the conclusion that, in one subject as in the other, it is what is commonplace and dull that counts for practical life. —Hardy §25

Hardy certainly did not boast about the ‘uselessness’ of number theory. In fact he wrote the exact opposite.

But here I must deal with a misconception. It is sometimes suggested that pure mathematicians glory in the uselessness of their work, and make it a boast that it has no practical applications. […] If the theory of numbers could be employed for any practical and obviously honourable purpose, if it could be turned directly to the furtherance of human happiness or the relief of human suffering, as physiology and even chemistry can, then surely neither Gauss nor any other mathematician would have been so foolish as to decry or regret such applications. —Hardy §21

And now we come to the difficult part: one can apply mathematics for good as well as for evil. Rockets that brought man to the moon also enable man to deliver a nuclear warhead anywhere in the world. The technology that enables Facebook to automatically tag people in photos also enables police to automatically identify people on CCTV. And so Hardy continues:

But science works for evil as well as for good (and particularly, of course, in time of war); and both Gauss and lesser mathematicians may be justified in rejoicing that there is one science at any rate, and that their own, whose very remoteness from ordinary human activities should keep it gentle and clean. —Hardy §21

Today mathematics has found many applications, and with the rise of artificial intelligence and machine learning there will certainly be many more. We are living in a time when mathematics can be used for both good and evil in our everyday lives. Cathy O’Neil recently wrote a book, ‘Weapons of Math Destruction’, highlighting the potential of mathematics to cause harm if employed without care and reflection. Mathematics has certainly lost the innocence and harmlessness it still enjoyed in Hardy’s time.


I learned two things during this day: First, every department is struggling to adapt to students who are less prepared for a mathematics degree than the department is used to teaching. Second, the introduction of subject-level TEF evaluations in 2019/20 is going to be a really big deal.

The issue of adaptation is an interesting one. It is not specific to a class of universities. Universities that lowered entry requirements in recent years from ABB to BBB have to rethink what they are teaching and how they are teaching it. Sometimes student engagement is a problem, sometimes material has to be moved from year 1 to year 2. But Russell Group universities, too, are finding that teaching methods and assessment structures that worked in the past have become less effective.

I came away from the day with the feeling that to some extent everyone is struggling. As lecturers and professors we all know mathematics and we all want to impart this knowledge to the next generation. But certainty seems to be draining away. People are becoming unsure what to teach, whom to teach it to, how to teach it and what the purpose of the teaching is. Few students who study mathematics will become mathematicians; for many a mathematics degree is a prerequisite to getting a job. Independently of admission standards, there is pressure in every department to keep dropout rates low. And then there are NSS scores, which often enough come to not only measure but define the quality of teaching.

This, ultimately, is the environment in which teaching happens and in which decisions are made about what to teach and how to teach it. It is also an environment that is alien to mathematics itself. And so uncertainty creeps in. If students are not learning mathematics to become mathematicians, what should we teach them? Is the $\epsilon$-$\delta$-notion of convergence really necessary for a job in X? What about Galois theory? Galois theory may have settled the impossibility of solving the general quintic by radicals and become the foundation of modern algebra, but is it not too difficult for an undergraduate? If a department is measured by its dropout rate and its NSS scores, maybe we can ease the students’ workload a bit; sacrifice a bit of rigour to gain a bit of happiness?

How should we teach mathematics? There were lively debates at the Education Day on this subject. Are lectures a thing of the past? Should we abandon lectures for more active modes of learning? The paper by Freeman et al. was quoted several times. The paper is a meta-analysis of studies comparing traditional lectures with “active learning” methods and comes out strongly in favour of active learning. The department in Edinburgh uses the flipped-classroom methodology for all first-year and most second-year teaching. Other departments have not gone this far, but have made steps in the same direction. The question should not be: lecture or active learning? The question should be: where is the right balance between lecturing and active learning? Maybe not the flipped classroom, but just a tilted one, which is, incidentally, the title of a brilliant article by Lara Alcock.


Why should this be a problem? I will talk about mathematics, because this is the field I know best, and I can only guess how much of the following applies to other fields. First, there are essential activities that are not captured by markers of esteem. The most important is the reviewing of papers. Ideally, a paper that is submitted to a journal is reviewed by one or two other mathematicians, who read the paper in detail and check that the proofs are correct. Reading a mathematical paper is hard work and takes time, and each hour spent reviewing a paper is an hour not spent writing your own papers. So, often enough, reviewers do not take the time to read a paper thoroughly. In the prevailing publish-or-perish atmosphere we spend less time polishing and proofreading our own papers and also less time reviewing papers written by others. In consequence I claim, with no evidence beyond the anecdotal, that the overall quality of research papers is diminishing.

Second, the hunt for esteem leads to the search for the magical creature called *the least publishable unit*. In the beginning scientists are motivated by the pursuit of knowledge, by the desire to answer questions whose answers are not known. What happens when the questions turn out to be difficult? This is when mathematics becomes interesting, this is where research becomes exciting. But it also means that I am spending time “unproductively”, because I am not writing a paper. Half a year spent working on a problem is half a year not spent writing papers. And so it can be tempting to chip off a small subproblem that I can solve and write a paper about it. And then perhaps chip off another subproblem. And if after some chipping the main problem is still too big, there are always other chippable problems to be found.

Third, measuring mathematics in terms of esteem means that when discussing other mathematicians we stop asking questions like: What is he or she researching? What result has he or she proven? Instead we ask the other kind of question: How many papers in the Annals of Mathematics or Inventiones Mathematicae have they published? How many NSF or EPSRC grants do they have? This is because the latter kind of question is easier to answer. Such questions don’t require us to think about actual mathematics or to make judgements about whether a given subdiscipline is important or what the point of a theorem is. They even give us the illusion that we can compare someone working on analysis of PDEs with someone doing algebraic topology without having to know much about either area.

Having said this, how robust is the scientific process if we treat science as a sport instead of pursuing it to increase our knowledge? It is a difficult question, because we are all pushed in this direction to some extent. In practice academic hiring and promotion are tied to markers of esteem: citations, publications and grants. And so the more appropriate question is: How much should we swim against the tide? How much time do we spend doing what is important for the community, for students and for mathematics but will not be measured in numbers? This encompasses many things: writing research monographs, developing high-quality teaching materials, reading other research papers in detail. I don’t have an answer to this question, but there are hints—studies in psychology that cannot be reproduced, debates about foundational work in symplectic geometry—that point to cracks in the facade of science.


Two things have changed. First, professionally MOOCs are “the competition”: they provide higher education at a fraction of the cost of a university. This is particularly true in England, where the cost of a university degree is £9,000 per year and rising. Even a distance-learning institution such as the Open University is significantly more expensive than most MOOCs. So I wanted to experience what a MOOC feels like for learners: to see how they use technology, how they pace videos and design programming exercises, and what we as lecturers can learn from them. Second, academics in the UK were striking for 14 days against steep cuts to their pensions, and hence I found myself with spare time on my hands.

Thus I decided to dive into the MOOC experience. After some research, I chose the grandfather of MOOCs, Andrew Ng’s Coursera course on machine learning.

The course lasts 11 weeks and each week usually covers one specific algorithm. The course starts with linear regression, continues with logistic regression and neural networks, covers support vector machines, K-means and PCA and finally talks about optimizing large scale problems via batch and stochastic gradient descent. Each week has about 1.5 to 2 hours of video content of Andrew Ng explaining the mathematics behind the algorithm of the week. Then there is a multiple choice test and finally a programming exercise that is submitted and automatically graded online.

• Andrew Ng succeeds in presenting a large amount of mathematics with a minimum of mathematical prerequisites. As mathematicians we tend to think that to understand a topic we have to understand it down to the last detail. It is ingrained in how we were taught. In the mathematics curriculum, analysis and the epsilon-delta notion of convergence are seen as the bedrock upon which calculus is built. In this course, however, Andrew Ng explains and uses the gradient algorithm without assuming any knowledge of calculus! Seeing how this is done is fascinating.

• Each lecture is followed by a multiple choice quiz which tests understanding of the material. I really liked the questions. They take the material of the lectures and then go just a little bit further to check whether we actually thought about what Andrew Ng said in the video.

• In the programming exercises we are implementing all the basic machine learning algorithms in Matlab (or Octave) from scratch: linear and logistic regression, k-means clustering, neural networks. For this to work a lot of scaffolding is provided by the course. One can argue that creating the scaffolding—helper functions to read in data, visualise the results, etc.—is at least as challenging as coding the core of the algorithm. But of course too much freedom makes it impossible to automatically grade the programming exercises.

• Machine learning. I had seen some of the algorithms before, but seeing a systematic exposition of the material was helpful. However, particularly useful were the nuggets of wisdom on the practical aspects: how to divide a dataset into training and test sets, how to go about optimising learning rates and other hyper-parameters, how to diagnose bias and variance. All the things that are needed to make theory work.

• Focus. Everything in the course is focused towards a goal. The course starts with basics: cost functions, nonlinear optimisation and gradient descent, but everything is introduced only to the extent that it is strictly necessary: maximum likelihood is mentioned only once in passing, as is convexity; the conjugate gradient method is used in the programming exercises, but only as an optimisation black box. In a 50-minute lecture there is the temptation to wander and explore side avenues because they are interesting or because they provide “a more complete picture”, but if the lecture is split into 10-minute videos, focus becomes essential.
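As a small illustration of the kind of algorithm the course builds everything on, here is a minimal sketch of gradient descent for one-variable linear regression. This is my own illustrative Python, not the course’s Matlab/Octave exercise, and the data are made up:

```python
# Minimal gradient descent for fitting y ≈ theta0 + theta1 * x.
# Illustrative sketch only; the course exercises work in Matlab/Octave.

def gradient_descent(xs, ys, alpha=0.1, iterations=1000):
    """Minimise the mean squared error cost by repeated small steps."""
    theta0, theta1 = 0.0, 0.0
    m = len(xs)
    for _ in range(iterations):
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Gradient of the cost J = (1/2m) * sum(errors^2)
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        theta0 -= alpha * grad0   # step downhill in each parameter
        theta1 -= alpha * grad1
    return theta0, theta1

# Points lying exactly on y = 1 + 2x; the fit should recover (1, 2).
theta0, theta1 = gradient_descent([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The point the course manages to make visually, without any calculus, is that the two `grad` lines always point uphill on the cost surface, so stepping against them decreases the cost.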

All in all it was both an enjoyable and instructive experience. I am looking forward to continuing with the spiritual successor, Andrew Ng’s second MOOC on deep learning.

In the end I settled on showing how to evaluate

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$

There are many proofs of this identity—fourteen have been collected by Robin Chapman—and it is often done as an application of the theory of Fourier series. One of the proofs, however, uses double integrals: using the geometric series one can show that

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \int_0^1\!\!\int_0^1 \frac{dx\,dy}{1-xy}.$$
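The geometric-series step is worth spelling out, since it is the whole trick (a sketch of the standard computation, expanding the integrand and integrating term by term):

$$\int_0^1\!\!\int_0^1 \frac{dx\,dy}{1-xy} = \int_0^1\!\!\int_0^1 \sum_{n=0}^{\infty} (xy)^n \,dx\,dy = \sum_{n=0}^{\infty} \int_0^1 x^n\,dx \int_0^1 y^n\,dy = \sum_{n=0}^{\infty} \frac{1}{(n+1)^2} = \sum_{n=1}^{\infty} \frac{1}{n^2}.$$

What remains is to evaluate the double integral in closed form, and that is where the change of coordinates comes in.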

Evaluating this integral is an instructive exercise because it confounds many of the assumptions about double integrals students may have: the integration domain is as simple as one might hope, yet the right approach is to perform a change of coordinates; the resulting domain can be parametrized as one piece, yet it is better to split the domain in two; non-trivial trigonometric identities are used to simplify the integrands.

Whether or not the students were entertained remains unknown, but as a result I wrote some notes explaining the calculation in great, perhaps excessive, detail.


There are genuine arguments for controlled immigration and “taking back control of the border”. There is also merit in finding and deporting those who are in the country illegally. But what can possibly be the point of making life hell for those who have done nothing wrong? What purpose is served by casting the net of suspicion so wide that it causes collateral damage?

Unfortunately, this is not an isolated incident. There is the “unfortunate error” made by the Home Office when it sent out up to 100 letters to EU nationals ordering them to leave the country. There are cases of people who have lived more than 50 years in the UK and came close to being deported because they could not prove that they moved to the UK as children in the 1960s. There are stories of EU nationals applying for permanent residence in the months after Brexit who had their applications rejected because of technicalities. And then there is today’s case of a man being detained for two months and threatened with deportation after reporting a crime.

What does the future hold? Starting in January, banks will be helping Theresa May create a “hostile environment” for illegal immigrants by carrying out immigration checks on their customers. It would be a miracle if these checks didn’t generate false positives—legal immigrants who are mistakenly flagged up by the system and whose bank accounts are closed or threatened with closure. There is also the general uncertainty about the rights and status of EU citizens in the UK after Brexit and about what process they will have to follow in order to continue living their lives.

And while one part of the country seems intent on making life as hard as possible for immigrants, another part seems unable to do without them. Proposals for a “barista visa” have been floated, and farms are already complaining about a shortage of migrant workers.

I would like to look optimistically into the future, but after hearing Philip Hammond admit that the cabinet has not yet discussed the government’s preferred “end state position” after Brexit, it is hard to shake off doubts.

The document below contains some of these nuggets, written in a form that may be useful to students writing a bachelor thesis or final year project in mathematics or some other technical subject. I am grateful to my colleagues who read it and helped improve it and to my students who provided me with the necessary experience to write it.


Nothing, that is, except for the installation of an open-source, custom Android distribution. One can of course debate the necessity of this move and whether it is not just a waste of time. I will not do so, except to say that I like to feel that it is I who is in control of my phone, not Sony, Google or someone else. I like to start with a clean phone, not one that has five Sony-made apps preinstalled “for my convenience”. Here, for example, is an explanation of why Android does not allow one to control apps’ internet access by default.

Having gone through the installation process three years ago with my old phone, I thought two or three hours should be plenty of time to unlock the bootloader, install a fresh Android and get on with life. Oh, how naive we sometimes are…

My Android of choice was LineageOS, the spiritual successor of CyanogenMod. The process started out reasonably clearly. First, one needs to unlock the bootloader. When requested, Sony does provide the unlock key together with warnings that unlocking the phone voids the warranty, is dangerous and that from now on I am on my own. To demonstrate the general user-friendliness of it all, let me mention that the unlocking itself is done via the intuitive command

> fastboot -i 0xfce oem unlock 0xUNLOCK_CODE

while the phone is in bootloader mode and connected to the computer via USB.

Unlocking the phone wipes its contents, a lesson I had forgotten since last time and had to relearn by experience. With the phone unlocked, one then installs a recovery environment, in this case TWRP, and then uses it to install LineageOS. Finding the right version of TWRP for my phone was the first challenge. The latest version, 3.1.1, simply did not want to work and the only version that did was 2.8.7. Fine, so be it. With TWRP installed and running, I followed the instructions, wiped the phone, loaded the LineageOS installer and pressed install.

However, instead of installing, the process stopped with some error messages. Some googling seemed to suggest that I was using too old a version of TWRP to install too new a version of Android (7.1). So, back to installing the recovery. Some more googling seemed to suggest that my current firmware was too old to support the newer version of TWRP… So, I restore the system from a backup and go about updating the firmware. This introduces me to tools like FlashTool and XperiFirm, which download the up-to-date firmware and package it for installation. The packaging process turned out to take about two hours.

It is 4am by the time I press the button to flash the new firmware, and this is when things get interesting. The installation process aborts with an error. Apparently, I forgot to check a box in the settings allowing software from “unknown sources” to be installed. And now I have a phone that does not boot, so checking the box is no longer an option; nor does it boot into recovery, so restoring from backup is not possible either. I decide to call it a night and go to sleep.

The following day, I realize that I will not get to a working phone with Linux alone. So I boot into Windows, install the official Sony firmware update tool, restore the phone and start again: new firmware with all boxes ticked, then latest version of TWRP, then finally the installation of LineageOS and the Google Apps bundle. About three hours and it is done!

What have I learned? Nowadays, installing Ubuntu on a computer is simple and straightforward. Installing a custom Android distribution, on the other hand, is an endeavour requiring technical skills, dedication and lots of googling. All in all, it took me about eight hours. Information is found on websites of varying trustworthiness. There is a new language to be learned, vocabulary to be absorbed.

Reinstalling the system means rebooting the phone variously into recovery, bootloader or normal mode, each requiring different buttons to be pressed at different times. When rebooting into recovery mode fails, one is then left with the mystery: is it because the wrong buttons were pressed or because the installation of the recovery environment failed?

There is also a general lack of certainty, whether some piece of information applies to my given situation. There is no uncertainty in the instructions. They are always very certain. But do they apply to my phone with Android Nougat or just to Kitkat? Is the firmware version important? The version of FlashTool I use to package the firmware? And what are the consequences of a mismatch between phone and instructions? Can one really brick a phone beyond repair?

Such a steep learning curve, almost a learning cliff, will deter most people from engaging with their phones below the surface. Once there was Windows XP, where it was much easier to reach below the surface. It was not difficult to access the registry, mess around with drivers and network settings. In fact, troubleshooting was part of daily life. Now the surface seems much more polished and much more difficult to penetrate. Getting “under the hood” is more difficult and less encouraged. This is a pity.

Was it worth it? Probably.

Mathematics helps us to…

- … find precise answers to precise questions.
- … find approximate answers to vague questions.
- … interpret answers that seem precise but are not.
- … figure out the right questions to ask.

Below I have tried to develop these points and to supplement them with examples. The text below is addressed to first-year students of mathematics at Brunel University.

This is what one usually associates with mathematics and it is true that mathematics provides tools that can be used to find precise answers to sufficiently precisely formulated questions.

For example, a question that might be of interest to you is, how much money will you owe to the Student Loans Company by the time you graduate? Assuming average earnings, how long will it take you to pay back the loans and what will be the total amount you paid for the university education? Mathematically, the answer requires little more than an understanding of compound interest and geometric series. One also has to read the rules for student loans: how are interest and repayment rates calculated based on the salary. You will explore this question in more detail in group projects this year.
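To make the compound-interest mechanics concrete, here is a small sketch in Python. All the numbers (interest rate, salary, repayment threshold) are made-up placeholders, not the actual Student Loans Company terms, which you will have to look up for the group project:

```python
# Hypothetical student loan sketch; the real SLC rules tie the interest
# rate to RPI and income, and repayments to earnings above a threshold.

def years_to_repay(balance, interest=0.045, salary=30_000,
                   threshold=25_000, repay_rate=0.09, max_years=30):
    """Years until the loan is cleared, or max_years if it is written off first."""
    for year in range(1, max_years + 1):
        balance *= 1 + interest                             # interest accrues
        balance -= repay_rate * max(0, salary - threshold)  # repay 9% above threshold
        if balance <= 0:
            return year
    return max_years

# With a £50,000 balance the interest (~£2,250/year) outruns the
# repayments (£450/year), so the loan is never cleared:
print(years_to_repay(50_000))                  # prints 30 (written off)
# A smaller balance and a higher salary clear much faster:
print(years_to_repay(5_000, salary=40_000))    # prints 5
```

Even this toy model shows the qualitative behaviour: below a certain salary the balance grows despite the repayments, which is exactly the kind of observation the geometric series makes precise.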

More complicated questions that nevertheless require precise answers arise in space flight. How do we navigate a spacecraft to a comet flying through space and land a robot on its surface? In 2016, the Rosetta mission successfully landed a spacecraft on a comet 4 km in diameter that is 780 million kilometers away. The necessary precision could be compared to throwing a baseball from London to Tehran and hitting a particular window. Like many such comparisons, this one is not entirely accurate.

Planets, comets and spacecraft move according to Newton’s law of gravity: Two objects attract each other with a force that is proportional to the product of their masses and decreases with the square of the distance. From this one can derive the equations governing the motion of the sun, earth, moon and other celestial objects. If we can compute sufficiently precise solutions then we can plan the trajectory of our spacecraft. The recent book and film Hidden Figures is in part about solving these equations before computers became available.
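In symbols, with $G$ the gravitational constant, $m_1$ and $m_2$ the masses and $r$ the distance between the two objects, the law reads

$$F = G\,\frac{m_1 m_2}{r^2}.$$

The spacecraft’s trajectory is then a solution of the system of differential equations obtained by applying this force law to every pair of bodies at once.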

Consider the question:

How many bricks are there in London?

This question is different in nature. We don’t want to know the exact number. In fact, no one knows exactly how many bricks there are in London. The actual answer is unknown and unknowable. But we can estimate how big the answer is, how many zeros the number contains. Mathematical reasoning provides a tool to arrive at this rough approximation. Mathematics is not always about precision in the sense of precise numbers. Mathematics is about precise thinking, even when the numbers themselves are imprecise. Using approximations is fine as long as we are aware of what we are doing.

The technique used to approach questions such as this is called Fermi estimation. We can arrive at an answer by estimating how many bricks are necessary to build houses for all inhabitants of London to live in.

- There are about 10 million people living in London.
- The average household size in the UK is 2.4 people, meaning that there are about 4 million households in London. We are not too bothered with precise numbers here. Since our estimates are only approximate, rounding them will not do much harm either.
- An average two-bedroom flat has about 80 square meters. Let us assume our flats are square, divided into four equally sized rooms. The total length of the walls of such a flat would be $6\sqrt{80} \approx 53.7$ meters. If we replace 80 by 81, this becomes exactly 54 meters of walls.
- How high are ceilings? Let us say they are 2.5 m high. This is generous, as was the 80 square meter estimate, but we need to compensate for the fact that we only take residential buildings into account. This would give around $54 \times 2.5 \approx 135$ square meters of wall per household.

- One brick has a face of roughly $20 \times 5$ cm, meaning it has an area of about $0.01$ square meters, i.e. about 100 bricks per square meter per row. Assuming two rows of bricks per wall this comes to 200 bricks per square meter of wall.

- Multiplying 4 million households by 135 square meters of wall and 200 bricks per square meter gives us roughly 100 billion bricks in all of London.
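The whole estimate fits in a few lines of code, which also makes it easy to see how sensitive the answer is to each assumption (every input below is one of the rough guesses from above):

```python
# Replaying the Fermi estimate; every input is a rough assumption.
people = 10_000_000            # population of London
households = people / 2.4      # average household size of 2.4
walls_m = 6 * 81 ** 0.5        # square 81 m^2 flat in four rooms: 54 m of walls
wall_area = walls_m * 2.5      # 2.5 m ceilings -> 135 m^2 of wall per household
bricks_per_m2 = 200            # two rows of bricks, ~100 per row per square meter
total = households * wall_area * bricks_per_m2
print(f"{total:.2e} bricks")   # about 1.1e11, i.e. roughly 100 billion
```

Doubling any single input only doubles the answer, which is why sloppy individual guesses still give the right order of magnitude.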

We have arrived at an approximate answer. Does this answer make sense? At first glance it does. The number is big but not too big. We can try to check our answer by estimating the number of bricks in a different way.

- According to Statista, the following numbers of bricks (in billions) were produced in the UK in the last couple of years.

  Year               2012   2013   2014   2015
  Bricks (billions)  1.459  1.555  1.824  1.919

- London has about 1/8th of the population of the UK, so let us assume that 1/8th of all bricks were used in London. If, on average, 1.6 billion bricks were produced in a year, then 200 million bricks were used in London each year.
- Assume that houses are up to 100 years old and that brick production has not changed in the last century. Then London would contain 20 billion bricks.
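This cross-check is also only a few lines of arithmetic:

```python
# Cross-check via annual UK brick production (billions, figures as above).
production = [1.459, 1.555, 1.824, 1.919]   # 2012-2015
avg = sum(production) / len(production)     # about 1.69 billion per year
london_per_year = avg / 8                   # London has ~1/8 of the UK population
total = london_per_year * 100               # assume ~100 years of building stock
print(f"{total:.0f} billion bricks")        # prints "21 billion bricks"
```

This lands within rounding of the 20 billion quoted above; the rougher figure comes from rounding the average production down to 1.6 billion.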

We see that we have arrived at a different answer, but within the same order of magnitude. For our purposes we call this a success.

A weather forecast predicts not only temperature, wind speed and direction, but also the chance that it will rain at a given time in a given place. The statement

“There is a 60% chance that it will rain in Uxbridge tomorrow between 1-2pm.”

seems clear at first glance, but what is really meant by it? A couple of possible interpretations come to mind.

1. If 100 people are in Uxbridge during that time, 60 will get wet.
2. During the hour there will be 24 minutes of sunshine and 36 minutes of rain.
3. The number 60% is what the weather forecasting model “believes” about the future.

In the context of a weather forecast for Uxbridge and a time interval of one hour, answers (1) and (2) sound absurd. They make more sense if the forecast is meant for the next 24 hours and all of South England. Nowadays, however, we have smartphones with GPS receivers so the weather app knows exactly where we are and can provide a supposedly personalised forecast.

What are we to make of answer (3)? This answer is at the same time more honest and less helpful. Really understanding what it means would require one to look at the details of the model, to inspect the equations that are used to calculate the chance, and to consider the model’s past predictions and how they compared with reality. Doing all this requires mathematics. See the post on mathbabe for suggestions on how to measure the accuracy of such a forecast.
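One standard tool for scoring such probabilistic forecasts (my example here, not necessarily what the linked post proposes) is the Brier score: the mean squared gap between the forecast probability and the 0/1 outcome.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and what happened.
    0 is a perfect forecaster; always saying 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten hypothetical days of "60% chance of rain"; it actually rained on six.
forecasts = [0.6] * 10
outcomes = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(round(brier_score(forecasts, outcomes), 2))   # 0.24
```

Note that a single day tells us almost nothing; a “60%” forecaster can only be judged over many days, by asking whether it rains on roughly six out of every ten of them.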

To ask the right question is harder than to answer it. —Georg Cantor

Sometimes we need mathematics to start asking the right questions. Consider the problem of predicting stock prices. If we plot the daily price of a stock over a time span of several years, the result looks erratic. There is a lot of up and down, sometimes more up than down, sometimes more down than up. There is noise in the short-term and perhaps some discernible trend in the long-term. Sometimes there are sudden movements while at other times the stock price makes an excursion only to return after some time to where it started.

How do we make sense of this? The question “What will be the stock price tomorrow?” is not very helpful. The price of a stock is an estimate of the value of a company, which is affected by the company’s performance, current events and public opinion. To make accurate predictions, a crystal ball is indispensable. We have more hope of answering other questions: Can we detect trends? Can we identify clusters of similarly behaving stocks? Can we distinguish between “risky” and “safe” stocks? What do these terms even mean? For these questions, mathematicians working in universities, banks, hedge funds and financial companies have developed tools: some have names you have heard, like average, mean, variance and correlation, and some more complicated ones, like volatility, regression and multi-factor models. You will see some of these in the modules on probability theory and statistics and others in more specialised modules on financial mathematics.
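To make the first few of these tools concrete, here is a small Python sketch that computes daily returns, their mean, and their volatility (the standard deviation of the returns) for a made-up price series:

```python
import statistics

# Invented daily closing prices, purely for illustration.
prices = [100, 102, 101, 105, 104, 108]

# Daily returns: relative change from one day to the next.
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

mean_return = statistics.mean(returns)
volatility = statistics.stdev(returns)   # one crude proxy for "risky" vs "safe"

print(f"mean daily return: {mean_return:.4f}")
print(f"volatility:        {volatility:.4f}")
```

A stock with high volatility swings widely around its trend; two stocks whose return series are strongly correlated tend to move together, which is the starting point for the clustering question above.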

A different question, one where the role of mathematics is less obvious, is the ranking of search results. If you enter a search term such as “running” into Google, you are presented with a list of pages that have to do with running. Imagine having to design the algorithm in the background: how do you choose which page to put at the top? As humans we make intuitive judgements about importance and relevance every day. Given twenty links to websites about running, we would be able to sort them according to what we perceive as relevance. Of course, if 50 of us did this exercise, we would end up with 50 different rankings. For the whole internet this approach is impractical. It is worth remembering, however, that about 20 years ago there were websites dedicated to providing curated web directories—lists of links to web pages sorted by categories, like books in a library. Yahoo started out as such a website. These directories were useful because no one had yet figured out how to search and sort the internet automatically. Search engines did exist, but their results were not always very helpful.

Google’s rise to become one of the dominant internet companies began with search. More precisely, with an algorithm to automatically determine the relevance of web pages. The algorithm, called PageRank, recasts the problem in the language of graphs or networks. Each web page is represented by a node in the network, and we draw a link from one node to another if the first page contains a hyperlink to the second. Then we imagine placing a bucket of water at each node and letting it flow through the links of the network. Nodes with no or few incoming links will receive no or little water, while their initial water flows away through any outgoing links; these nodes are the unimportant ones. On the other hand, water will accumulate at nodes with many incoming links; these will be the important nodes in the network. By observing the amount of water in each node over time we can get a sense of what the network thinks about the importance of each node. Mathematically this can be expressed using the language of linear algebra. You will explore the PageRank algorithm in more detail in projects later in the year.
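The water analogy can be sketched in a few lines of Python as a power iteration on a toy four-page web. The link structure below is invented; the damping factor 0.85 is the value used in the original PageRank paper and plays the role of a small baseline trickle of water to every node:

```python
# Toy PageRank by power iteration. links maps each page to the pages
# it links to; the invented structure sends most links towards page 2.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = len(links), 0.85                 # number of pages, damping factor

rank = [1 / n] * n                      # start with equal water everywhere
for _ in range(100):                    # power iteration
    new = [(1 - d) / n] * n             # baseline trickle to every node
    for page, outgoing in links.items():
        for target in outgoing:         # split this page's water among its links
            new[target] += d * rank[page] / len(outgoing)
    rank = new

print([round(r, 3) for r in rank])      # page 2, with the most incoming links, ranks highest
```

Page 3, with no incoming links, ends up with only the baseline amount, exactly as the analogy predicts. In linear-algebra terms, the loop repeatedly multiplies the rank vector by the link matrix until it settles into an eigenvector.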

We have seen how PageRank recasts the problem of ranking search results in the language of mathematics. While doing so, we must not forget that the problem we started with has no unique solution. There is no “right” or “wrong” order of ranking web pages. If I search for web pages about “running” I might be looking for running clubs, running magazines, tips on how to start running, running shoes or pictures of people running. Mathematics can help us think about the problem more clearly, see what is important and gain insight into the problem, but it does not answer the question “Which web page is most relevant for running?”

One can look at these and other numbers and make mathematical statements: What is the average time it will take a graduate to repay the loan? How can we model the value of a university degree, weighing the hopefully higher salary of a graduate against the loan owed to the Student Loans Company? We ask our first-year students at Brunel to create a simplified model of their projected income and loan repayment. It is usually an educational experience for them.
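A deliberately crude version of that modelling exercise might look like the sketch below. The £21,000 threshold and the 30-year forgiveness period are mentioned in this post; the 9% repayment rate, 4% interest rate and 2% salary growth are illustrative assumptions, not official figures:

```python
# Simplified loan-repayment model: each year, interest accrues, then the
# graduate repays 9% of income above the threshold. All rates are
# illustrative assumptions for the exercise, not official figures.
def years_to_repay(loan, salary, threshold=21_000,
                   rate=0.09, interest=0.04, growth=0.02, limit=30):
    for year in range(1, limit + 1):
        loan *= 1 + interest                     # interest accrues first
        loan -= max(0, salary - threshold) * rate  # then the yearly repayment
        if loan <= 0:
            return year
        salary *= 1 + growth                     # modest annual pay rise
    return None  # not repaid within 30 years: the remainder is forgiven

print(years_to_repay(loan=45_000, salary=28_000))  # None: never fully repaid
```

With these invented numbers the interest outpaces the repayments, so the loan is never cleared and the balance is written off after 30 years; students are often surprised by how typical that outcome is in their own projections.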

Putting numbers aside, it is important not to forget the psychological effects of starting adult life with an outstanding loan of this size. The Guardian recently reported on a graduate finding herself in a bureaucratic nightmare when the Student Loans Company mistakenly increased her interest rate for taking a trip abroad. A follow-up story a week later showed that her case was far from unique and that administrative mistakes have made life difficult for many graduates. Kafka remains as relevant today as ever.

Taking out a student loan, a necessary step for many to obtain a university education in England, means ceding freedom and independence to the Student Loans Company in ways that go beyond taking on a financial obligation. The relationship between the graduate and the Student Loans Company will, in many cases, last the full 30 years until the loan is forgiven. In this time the company will amass a wealth of information about each graduate that is matched only by HMRC and the police. However, the Student Loans Company is not the government but a company owned by it, which is a convenient sleight of hand.

For example, each trip abroad lasting more than three months has to be reported, together with “evidence”, which for longer trips includes travel itineraries and bank statements. The Student Loans Company is not satisfied with collecting repayments from those earning more than the set threshold; it requests watertight documentation of everyone’s financial situation. Being under such scrutiny for 30 years is bound to leave traces.

What else? The conditions on student loans can be changed retroactively. This has already happened: in 2015 the repayment threshold was frozen at £21,000 for five years rather than being uprated annually in line with average earnings. Who is to say that the interest rate will not be changed? Or the time until the loan is forgiven? And then there is the unfairness of the year-of-birth lottery: those who started university before 2012 won, paying the lower fees, while those who started in 2012 or later lost and had to deal with £9,000 a year.
