Could Peer Review Spot Bad Fiction Books?

If the coronavirus pandemic has taught us anything, it’s that science rules and partisan politics drool. Think about the progress achieved in the wake of such a major setback. Covid-19 is a disease that spreads exponentially, at a rate people struggle to imagine. Infections were enormous in number and rapid in spread. Pandemics of this nature have been the biggest killers in all of human history: the Spanish Flu of 1918 killed more people than the First World War, which was still raging at the time.

Yet a little over a year after the first cases were discovered, multiple effective vaccines have been safely tested and rolled out. For reference, smallpox was only eradicated in the 1970s, after several thousand years of destruction, death, and the collapsing of one Roman Empire (RIP).

What caused such progress in the battle against a new worldwide pandemic? Many factors in medicine, biology, genomics, and so on, but all of that knowledge rested on empirical principles, a.k.a. science. Empirical science can stake a claim to being among humanity’s best creations, but what makes it so good, and what lessons can we take from it for fiction?

Spotlight On Peers

How science makes progress is complex enough to fill several books, but I want to direct your attention to one aspect baked into the academic process: peer review. For those unfamiliar, peer review is a self-regulating, quality-assurance filter that scholarly papers must pass through before being published. Its importance in the sciences can’t be overstated.

Let’s say a research team performs an experiment and writes it up in an article for a journal. After they submit it (and assuming the journal’s editor doesn’t reject it out of hand), the editor invites experts with similar knowledge and qualifications to review the paper anonymously. The reviewers looking over the paper have no idea who wrote it and, in turn, the original authors have no idea who is marking them up. (Sometimes this isn’t entirely the case, but for the most part somebody is always in the dark.) These peer referees can then suggest revisions, accept the paper as is, or reject it.

This process is still the best available form of quality assurance out there, and not just in academia. In his book Originals, Adam Grant describes an experiment in which three groups of people involved with a circus were asked to predict the success of a particular act. The creators, the circus managers, and other circus performers were all given this task.

Not surprisingly, the creators were way off base when judging the success or failure of their own acts. After all, if creators knew what worked beforehand, this experiment wouldn’t need to exist. Managers did a bit better, but not by much; their judgements were still excessively cautious because, after all, if a new act bombed, they’d be out a lot of money.

Circus-performing peers, on the other hand, did a much better job of guessing which acts would wow the public. The reason is that performers not involved with the test act sit in a Goldilocks zone of judgement. They are experts like the managers, but are not weighed down by a manager’s external considerations. They are just as talented as the creator, but are not burdened by the creator’s feelings about the creative process behind that particular act. They’re exposed to the final act with no other information. Because of this combination of expertise and no clear vested interest, they serve as the best indicators of an act’s success. This is what happens in the academic peer review process too.

Reading about this experiment got me wondering whether a similar system could be implemented for new and upcoming fiction.

The Current Dilemma

The publisher’s problem has been the same since publishers first crawled into existence: they have no idea which books they publish will sell and which ones won’t. They have to sink some of their own money, an advance, into the author’s pocket before the book is published. The costs are enormous and if it backfires…that’s bad news, man.

However, the publishing situation feels remarkably similar to the circus experiment Grant highlighted. We have the creators, authors, who are just as terrible at picking which of their works will be big hits. How many times have we read about authors who reacted like “Really? That book? I mean, it’s good, but I thought this other one I did was way better.” Their position as creators stops them from being clairvoyant.

But the publishers, the current quality-assurance filters, are in the same position as the circus managers. Yeah, they might not have the author’s clear vested interest in success (yet), but their conservative in-house accountants can slap a dog-collar on an enthusiastic editor.

If an author can’t see success, and a publisher can’t quite see it, who can? By this logic, the author’s peers, right? Correct, but who exactly would serve as a peer referee for a fiction author?

Use The Imprint

When agents are begging publishers on an author’s behalf, they aren’t really approaching publishers at all, but rather an imprint within a publisher. Publishing houses are partitioned into imprints specialising in particular genres (crime and thriller, romance, general fiction, how-to guides, etc.). Authors are signed to imprints within publishers, not to publishers as a whole.

And that is where the pool of peer referees can be found: the other authors signed to the imprint the agent approaches.

Just like the other circus performers judging a new act by somebody similar, the authors on an imprint are the closest thing to the prospective author’s peers you could ask for. They have the same publishing team, presumably similar levels of experience and expertise in the new author’s genre, and no other considerations weighing on their minds.

So how would this experiment work?

A Hypothetical Run-Through

Disregarding the number of submissions imprints receive for a moment, let’s imagine that an agent has an unsigned author and believes they know the perfect commissioning editor who would take the book. Over the typical boozy lunch publishing is infamous for, the agent pitches the new novel. The commissioning editor likes it and takes the manuscript not to their colleagues in the imprint, but rather to two or three authors already on their list.

As part of their contracts, these authors would be required to perform a certain number of peer reviews like this each year.

These two or three authors would review the manuscript, make comments, and give their estimate of the book’s likelihood of success for the imprint. No more, no less. Incentives would have to be in place so that a negative review wouldn’t adversely affect the peer reviewer.

The peer authors would then return their comments to the commissioning editor. If the comments are negative, the editor could turn the agent down. If the comments are positive, the editor would then, as they do now, put together a ‘vision document’ and present it to their colleagues. The colleagues could, as academic journal editors can even after peer review, still reject the book.

If the book is accepted, the process would play out as it does now in the imprints, but hopefully with a little more well-founded confidence behind it.

What The Experiment Hopes To Achieve

The imprint peer review experiment has several ambitions and hopes to overcome several challenges in the current acquisitions process. First, it aims to improve the odds of spotting winning submissions. New books are slot machines bound between two covers. They offer no guarantees and often have to create their own demand before selling. Obviously the uncertainty can’t be removed altogether, but I believe peers would reduce the risk dramatically.

Peer review is the best quality-assurance filter out there because peers are the most attuned to one another’s strengths and weaknesses. Impressing your editor isn’t fun, but impressing the other authors in your writers’ group is.

Secondly, the peer review experiment seeks to strengthen an imprint’s confidence in its decision-making. Even the best commissioning editor has their finance team gagging them at times. Additionally, it would make the other factors the imprint needs to worry about seem less important in both the short and long run. After all, I hear stories of books being rejected for reasons as banal as “we have a similar book already on our list” and “the market isn’t wild about it right now”. The issue with both of those so-called reasons is that they don’t consider the book’s objective quality. The peer review process seeks to remedy that situation.

Thirdly, peer review is also meant to strengthen the authors already signed to the imprint. Assessing the work of a similarly competent peer could spark new ideas in those authors, sharpen their editorial skills, and establish trust among fellow authors. A large network of authors working on each other’s behalf could also help new authors access established authors’ platforms. Pushing the onus of marketing and promotion onto one author is a bad idea, so dividing and conquering could ultimately be more profitable.

Conclusion

In the academic world, the sciences are flourishing like a royal garden while the humanities are struggling like a daffodil in the desert. Science’s standing in the world is leading to breakthroughs, innovations, and unbounded excitement. The arts could deliver the same, but their quality-assurance process isn’t as powerful. Artists already employ an informal peer review process whenever they ask fellow artists for ideas or input, so why not make it a formal part of being a professional artist?