After you. No, really.
No matter where you live or who you are, if you’ve attended school, been on a sports team, played music, or made any kind of presentation ever, there is a core memory that we all share: being chosen to go first. That sinking feeling you get when you stand up and step into the unknown and start speaking, or playing, or demonstrating, or whatever, and all eyes are on you – because you’re the first one. The quiet terror that seeps in as people begin to assess you, the judges peering at you, the knowledge that everyone else has more time to prepare than you…It stays with you forever.
But emotions aside, does it really matter? As long as you are prepared and confident and try your best, your place in the queue is of no consequence, right? Sadly not. It turns out that when you go matters a great deal and – spoiler alert – going first often means you are judged more harshly or receive less credit for your efforts, especially when the selection criteria are stringent and when the evaluators are unfamiliar with the quality of the candidates.
Dr Jiang Bian and Yanbo Wang of the University of Hong Kong, along with two other co-authors, dug deeper into the science of this disagreeable fact using a novel research design. Their paper “Good to Go First?” looked at the merits and demerits of the “position effect” – that is, the effect of where in the evaluation sequence a candidate happens to be assessed.
Good to go first? Or not?
During their background research, the authors initially found conflicting evidence as to whether the order in which proposals or performances were evaluated had an effect on the result. In some cases, such as business proposals or investment evaluations, the first presentation was more likely to be approved; elections, too, can exhibit a “ballot order effect”, whereby people listed first on the ballot win more votes. This is attributed to the primacy effect, the “tendency for facts, impressions, or items that are presented first to be better learned or remembered than material presented later in the sequence”.
But they also found that in other types of presentations, music performances for instance, or athletic competitions like swimming or ice skating, being evaluated later created an advantage, with one study finding that “figure skaters who perform[ed] later in the first round receive[d] better scores in the first and in the second round” and placed higher overall.
These inconsistencies are due to “thorny inferential challenges”. There is the fact that in situations that allow for self-selection, people who want to go first often do go first, perhaps reflecting their preparation and confidence, which ultimately impacts their results. Also, “the order in which a performance is rendered can have an impact on performance itself”, meaning it’s possible that being randomly selected to go first may induce performance anxiety and thus cause a worse performance – nightmare! The position effect can also affect the perception of evaluators: even if a competition randomly selects performance order, if those evaluating it are more likely to favour or disfavour earlier or later performers…well, we have a problem.
Enter the Innofund
To mitigate these causal identification challenges, the authors chose to study a unique competition: the “Beijing Municipal Innovation Fund for Technology-Based Medium and Small-Size Enterprises”, or Beijing Innofund. The fund’s aim is to promote the success of early-stage technology venture companies by awarding financing to the most promising ones. Each winning company receives a grant of between RMB200,000 and RMB1 million, as determined by the strength of its proposal, with the money helping it “cross the ‘valley of death’” – the precarious period between when a company launches a product and when it begins to generate revenue. An added benefit is that receiving an Innofund grant confers status on the winners, potentially allowing them to secure funding from other sources.
Because the size and prestige of the Beijing Innofund make it highly competitive and “economically meaningful” for the winners, it attracts thousands of proposals a year. The Innofund’s data set proved to be ideal for other reasons too: grants are judged by individual evaluators working alone, avoiding any peer influence; the evaluations are ranked and funds are allocated based on these rankings, rather than as a zero-sum “win or lose” award; marks are recorded after each proposal is reviewed; and the evaluations are based entirely on written materials, with no presentation component – removing any performance anxiety and allowing the authors to see exactly what the evaluators saw.
The authors looked at how the Beijing Innofund’s expert grant evaluators examined almost 3,000 grant proposals as a function of “the position in which they were assigned to evaluate each proposal”. The authors argued that because people assessing serious proposals such as grants “strive to make consistent, reliable and valid evaluations”, in situations where only a small proportion of resource seekers will be selected for funding, grant evaluators generally act conservatively: reluctant to make “extreme assessments” before they have seen the rest of the pool, they give out lower marks at the start.
They also proposed that prior experience plays a role: “newbie” evaluators, who have never judged a particular type of competition, tend to rely on a high but abstract “evaluative baseline of quality” at first, while they “calibrate their standards” – especially when the competition is prestigious and selective. This again results in lower scores for those who go first. Evaluators who do have competition-specific experience tend to be more consistently even-handed with their scoring, as they have a solid understanding of the quality distribution of the resource seekers applying to a specific grant programme.
To test this calibration effect, the authors focused on two types of evaluators: those with professional venture capital (VC) investment experience and those who had judged prior Beijing Innofund competitions. Their hypothesis was that judges with a VC background, accustomed to vetting higher-quality companies in the course of their work, would be more critical of the start-ups applying for the Innofund, while prior Innofund judges, with their specific experience and knowledge of the quality distribution of the grant seekers, would be less likely to penalise first proposals.
The authors then examined the judging data from the 2016 and 2017 Beijing Innofund competitions. The data set consisted of 400 evaluators and 2,938 grant proposals, with a mean of just over 30 proposals per evaluator. This meant they could accurately measure the “penalty of being first”, since most proposals would not be the evaluator’s first one and since each proposal was evaluated by multiple experts.
Since the Beijing Innofund’s reputation relies on integrity, it uses proprietary software that randomly matches a given proposal with five evaluators in three different roles: one start-up mentor, two technology experts and two finance experts. This added rigour to the study, since a selected proposal is semi-randomly assigned a different position in each expert’s review queue – meaning no evaluator has any idea what the others are looking at, or when. Additional safeguards include performing all evaluations at an off-site hotel, evaluating solely by computer, forbidding all communications during evaluations, and even having anti-corruption officials circulate to ensure adherence to all rules.
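The paper does not describe the matching software’s internals, but a minimal sketch of the kind of semi-random assignment described above might look like the following Python snippet; the evaluator pools, function names and use of Python’s random module are illustrative assumptions, not the Innofund’s actual system:

```python
import random

def assign_evaluators(proposals, mentors, tech_experts, finance_experts, seed=0):
    """Give each proposal one start-up mentor, two technology experts and
    two finance experts, drawn at random from (hypothetical) evaluator pools."""
    rng = random.Random(seed)
    return {
        proposal: (
            rng.sample(mentors, 1)
            + rng.sample(tech_experts, 2)
            + rng.sample(finance_experts, 2)
        )
        for proposal in proposals
    }

def build_review_queues(assignments, seed=0):
    """Each evaluator reviews their assigned proposals in an independently
    shuffled order, so the same proposal sits in a different position in
    each expert's queue."""
    rng = random.Random(seed)
    queues = {}
    for proposal, panel in assignments.items():
        for evaluator in panel:
            queues.setdefault(evaluator, []).append(proposal)
    for proposal_list in queues.values():
        rng.shuffle(proposal_list)
    return queues
```

Because each evaluator’s queue is shuffled independently, the same proposal lands in different positions for different experts – which is what lets the authors separate position from quality.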
After evaluation, each proposal is given a score between 0 and 100 by each expert, with the final results automatically added up and submitted. The proposal with the highest score receives funding first, the runner-up second, and so on until the fund’s annual budget is exhausted. For their analysis, the authors included several dummy variables that allowed them to see whether a proposal was the reviewer’s first, last, or close to first or last. They also created dummy variables capturing the evaluators’ professional backgrounds – whether they had VC experience or had been a prior Innofund judge. Since the proposals were evaluated by multiple reviewers, the authors also used a “firm-level fixed effects” model to estimate the position effect in evaluation while holding a proposal’s quality constant.
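For readers who think in code, here is a rough sketch of the kind of firm fixed-effects regression described above, written in Python with statsmodels. The column names, the data file and the clustering choice are hypothetical; this is not the authors’ actual specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per (evaluator, proposal) review, with hypothetical columns:
#   score          - the 0-100 mark given by this evaluator
#   firm_id        - identifier of the proposing firm (absorbed as fixed effects)
#   is_first       - 1 if this was the evaluator's first proposal, else 0
#   is_last        - 1 if this was the evaluator's last proposal, else 0
#   vc_background, prior_judge - evaluator-background dummies
df = pd.read_csv("innofund_reviews.csv")  # hypothetical file

# Firm fixed effects, C(firm_id), hold everything constant about a proposal,
# so the coefficient on is_first captures the first-position penalty rather
# than a quality difference.
model = smf.ols(
    "score ~ is_first + is_last + vc_background + prior_judge + C(firm_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

print(model.params[["is_first", "is_last"]])
```

In this set-up, anything fixed about a proposal is soaked up by the firm dummies, so any remaining gap between first and later reviews of the same proposal can be read as a position effect.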
Bad news for eager beavers
So, after such an intricate set-up and design – what did they find? Unfortunately, the news is still bad for those who like to go first. The authors concluded unambiguously that “first proposals are evaluated substantially less favourably than those presented thereafter”. With all control variables accounted for and the statistics checked and rechecked, projects evaluated first received overall marks more than 8% lower than projects evaluated later. Also, as anticipated, the experts with a VC background assigned lower marks across the board, and particularly in their first evaluations – regardless of the quality of the proposal.
And that’s actually quite serious. As the authors note, during “the allocation of funds by VCs, government science grants and corporate [research and development] awards…invariably some proposals must go first or last”. Their analysis found that “an applicant that is evaluated first needs [to score] in the top 10th percentile to merely equal the evaluation of an applicant in the bottom 10th percentile that is not evaluated first”! This has consequences both for the applicant, who likely misses out on critical funding and a reputational boost, and for the grant awarder, who likely misses out on a “potentially favourable investment or candidate”.
This is important information with no simple fix, though the authors have some suggestions. One strategy identified in prior research is to enter scores only after all proposals have been evaluated, but this creates other problems, such as recall bias. Three intriguing suggestions the authors offer are: (1) include “placebo proposals” at the start of competitions – for example, random proposals from previous years – to help the judges calibrate their responses; (2) allow judges to look at several proposals “off the record”, giving them a sense of proposal quality before they officially begin evaluating; and (3) make “post-hoc adjustments”, in which the software automatically adjusts scores to remove the “first-position penalty” (a toy version of which is sketched below).
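As a toy illustration of that third suggestion, a post-hoc adjustment could simply add an estimated first-position penalty back to scores given in an evaluator’s first review. The function name and the penalty value below are hypothetical assumptions, not the authors’ procedure:

```python
def adjust_scores(reviews, first_position_penalty=4.0):
    """reviews: list of dicts with 'score' (0-100) and 'was_first' (bool).
    Returns scores with the estimated penalty added back for first reviews."""
    adjusted = []
    for r in reviews:
        score = r["score"] + (first_position_penalty if r["was_first"] else 0.0)
        adjusted.append(min(score, 100.0))  # keep scores on the 0-100 scale
    return adjusted
```

Any real adjustment would, of course, need the penalty estimated from the competition’s own data rather than plugged in by hand.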
Regardless, organisations need to take this fact into account to avoid adding injury to insult – as we know, for most of us, it is hard enough to go first, let alone be penalised for it.
About this Research
Jiang Bian, Jason Greenberg, Jizhen Li, Yanbo Wang (2021). Good to Go First? Position Effects in Expert Evaluation of Early-Stage Ventures. Management Science 68(1):300-315.
References
Ballot order effects. (2022, April 20). MIT Election Data + Science Lab. Retrieved April 14, 2023, from https://electionlab.mit.edu/research/ballot-order-effects.
Brooks, A.W., Huang, L., Kearney, S.W., Murray, F.E. (2014). “Investors prefer entrepreneurial ventures pitched by attractive men”. Proc. Natl. Acad. Sci. USA 111(12):4427–4431.
Bruine de Bruin, W. (2006). “Save the last dance II: Unwanted serial position effects in figure skating judgments”. Acta Psych. 123:299–311.
Clingingsmith, D., Shane, S. (2017). “Let others go first: How proposal order affects investor interest in elevator proposals”. Preprint, submitted December 13, 2017. https://osf.io/preprints/socarxiv/6rbyx/.
Greenberg, J. (2021). “Social Network Positions, Peer Effects, and Evaluation Updating: An Experimental Test in the Entrepreneurial Context”. Organization Science 32(5). https://doi.org/10.1287/orsc.2020.1416.
Luo, H., (2014). “When to sell your idea: Theory and evidence from the movie industry”. Management Sci. 60(12):3067–3086.
Primacy effect definition. American Psychological Association Dictionary of Psychology. Retrieved April 17, 2023, from https://dictionary.apa.org/primacy-effects.
Unkelbach, C., Memmert, D. (2014). “Serial-position effects in evaluative judgments”. Current Directions Psych. Sci. 23(3):195–200.
Wang, Y., Li, J., Furman, J.L. (2017). “Firm performance and state innovation funding: Evidence from China’s Innofund program”. Res. Policy 46(6):1142–1161.