Publishing a style manual, particularly a lengthy, detailed manual that covers a ridiculous amount of technical material (Hello, AMA Manual of Style!), is a grueling process. In our case, it involved 10 people meeting for at least an hour every week for more than a year, where we tried not to get into arguments about grammar, usage, and the presentation of scientific data. After the meetings there would usually be flurries of e-mails about grammar, usage, and the presentation of scientific data. Then we’d all go home and dream about grammar, usage, and the presentation of scientific data. You get the picture.
My point is that the writers of style manuals are often a little, shall we say, too close to the material. In the case of the AMA Manual of Style, we are all editors as well—and it can be hard for us not to roll our eyes when we run into the same problems on manuscript after manuscript. Come on, authors: there’s a whole book on this stuff!
Which, of course, is precisely the problem. There is a whole THOUSAND-PAGE book that tries to encompass all aspects of medical editing. It’s impossible to expect authors to absorb all that information; they’re just trying to get published, and it’s our job to help them. Here, in classic top-10-list reverse order, are the top 10 editorial problems we see in our submitted and accepted manuscripts, compiled by committee and editorialized upon by me. If any authors happen to read this, maybe it will help them avoid the most common errors; if any journal website designers read it, maybe they can grab some ideas for clearer user interfaces; and if any copy editors read it, maybe they can enjoy shaking their heads in wry commiseration.
10. Missing or incomplete author forms. Most journals require authors to fill out some forms, usually involving things like copyright transfer, an assertion of responsibility for authorship, and so on. These forms are often filled out incorrectly or incompletely. Following a form’s instructions as to signatures and boxes to check can save significant amounts of time in the publication process.
9. Not explaining “behind the scenes” stuff. Values in a table don’t add up—oh, it’s because of rounding. The curve in this figure doesn’t connect the values listed in the “Results” section—oh, we used data smoothing. This kind of thing can be easily explained in a footnote, but many authors forget to do so because it seems so obvious to them.
8. Making life difficult for the copy editor. Authors and editors have the same goal: a polished, published, accurate manuscript. Sure-fire ways authors can ruin what should be a pleasant working relationship include suggesting that the copy editor is making changes in the manuscript for no reason; calling the copy editor to discuss changes without having read the edited manuscript first (this wastes oodles of time); and not reading the cover letter that comes with the edited manuscript. This last is particularly charming when the author then calls the copy editor to ask all the questions that are very nicely answered in said cover letter.
7. Common punctuation and style mistakes (not an exhaustive list). Most frequently we see authors fail to expand abbreviations; use different abbreviations for the same term throughout a manuscript; use commas like seasoning instead of like punctuation marks with actual rules of deployment; and overuse the em dash. However, I’d like to tell any authors reading this not to fret, because that’s the kind of stuff we’re paid to fix. Plus I can’t really throw stones—being a fan of the em dash myself.
6. Errors of grandiosity. Sometimes a perfectly nice and valid study will go hog-wild in the conclusion, claiming to be changing the future of scientific inquiry or heralding a sea-change in the treatment of patients everywhere. Or authors will selectively interpret results, focusing on the positive and ignoring the negative or neutral. It’s natural to want to write an elegant conclusion—it’s one of the few places in a scientific manuscript where one can really let loose with the prose—but it’s always better to err on the side of caution.
5. Wacky references. All journals have a reference citation policy, and across scientific journals it is fairly standard to give reference numbers at the point of citation, cite references in numerical order in the text (as opposed to only in tables or figures), and retain a unique number for each reference no matter how many times it’s cited. However, we still get papers with references handled in all kinds of odd ways (alphabetical, chronological, or seemingly inspired by the full moon). References that include URLs can mean big problems. Often the URL doesn’t work or the site is password-protected, subscription-only, or otherwise useless to the reader. Also aggravating: references that give the URL of the search that located the article rather than the URL of the article itself.
4. Duplicate submission. In scientific publication, it is not acceptable to submit a report of original research to multiple journals at the same time. Journal editors are likely to be more disturbed by this if it looks deliberate rather than like a simple mistake (not realizing that a foreign-language journal “counts,” for example) or if the case is debatable (a small section of results was published in another paper, but the new paper adds tons of new material). Remember those forms from the 10th most common mistake? One of them asks about previous submission or publication. We need authors to be up-front about any other articles in the pipeline, even if (especially if) they’re not sure if they might constitute duplicate publication.
3. Failing to protect patient identity. Yup, there’s a form for this too! Any time a patient is identifiable, in a photograph or even in text (as in a case report), authors must have the patient’s consent. (Contrary to popular belief, the gossip-mag-style “black bars” over the eyes are not sufficient to conceal identity.) Usually we hear complaints about this, because studies are written long after patients are treated and it can be hard to track people down, but them’s the breaks. If it’s really impossible to obtain after-the-fact patient consent, editors will work with authors to crop photos, take out details, or whatever it takes to “de-identify” patients.
2. Not matching up all the data “bits.” In the abstract, 76 patients were randomized to receive the intervention, but it’s 77 in Table 1. There was a 44.5% reduction in symptoms in the medicated group in the text, but later it’s 44.7%. Sometimes this happens because the abstract is written first from the overall results while the data in a table are more precisely calculated by a statistician; or maybe the number of patients changed along the way and no one went back to revise the earlier figures. Either way, it drives copy editors crazy.
1. Not reading a journal’s Instructions for Authors. These days almost all scientific journals have online submission, and almost always there is a link to something called “Information for Authors,” “Guidelines for Manuscript Submission,” or something similar. Judging by the kinds of questions editorial offices receive almost daily, authors rarely read these—but the publication process would often go so much more smoothly if they would.
We are proud of our style manual, although we realize it isn’t the last word in scientific style and format. There can never really be a “last word” because some editor will always want to have it! Anyway, without authors there wouldn’t be anything to edit, so we would never hold any “mistakes” against them. No matter how grievous a manuscript’s misstep, an editor will be there to correct it, because it’s our job. (But mostly because we can’t stop ourselves.)—Brenda Gregoline, ELS
Thanks for the serious but not heavy-handed list. I plan to share it with colleagues who think they are developing a manual of house style — I’ve been known to drop AMA heavily on meeting-room tables to make a point about the work involved. One observation about Mistake Number 1: Experience teaches that most journals don’t follow their own Instructions for Authors (eg, they claim to follow AMA reference style but insist on tweaks). I’m afraid that the narcissism of small differences is alive and thriving in editorial offices. But thanks again for the post.
This post contains the seriously misguided notion that citing paywalled references is inherently erroneous (where it conflates “password-protected, subscription-only” into the same category as unresolvable URLs or other “wacky” things “otherwise useless to the reader”). The era when all STM journal publishing is on sustainable open access models may be coming, but it isn’t here yet (2016), and it wasn’t when this post was written (2012). Please tell me that someone has disabused the author of this misapprehension by now, and that maybe a bracketed editorial correction could be made in point 5. The theme of this post (ie, an informed person disabusing underinformed people of their errors) is undermined by such a basic misconception of how STM publishing works and whether citing paywalled references is a mistake (it’s not). Don’t get me wrong: I look forward to a day when all articles are open access because an economically viable path for that has been blazed. Such a path would include how copy editing contractors even get paid under such models, let alone at compensation levels that aren’t half or a quarter, in constant dollars, of what was once the norm for in-house editing with company health insurance. But in the meantime, it is wrong to think that a physician is making a mistake by citing a paywalled reference in an STM article.