Increasing annotations during Open Review


One of the most helpful parts of Open Review is annotations.  As I’ve written about elsewhere, these annotations really helped me improve Bit by Bit.  However, the proportion of readers who actually annotated Bit by Bit was quite low (less than 1%).  So, some authors might wonder: how can I get more annotations on my manuscript?  In this post, I’ll offer three ideas.  If you have other ideas or data about the effectiveness of any of these approaches, please let me know.

Continue reading

Guest post: so you want to post your book for Open Review

This guest post was written by jimi adams and was originally published on the blog Scatterplot.

I never envisioned myself as a book person. I'd grown pretty comfortable writing up article-length ideas. I work in interdisciplinary fields, though, so article length varies considerably (I've submitted pieces anywhere from 2,500 to 14k+ words), but a book felt like a completely different animal.


I recently (finally) took the book plunge, first with a SAGE "little green book" on Gathering Social Network Data, mainly because it's a book I've wished existed many times while teaching social networks, at various points and to a range of audiences, over the past decade or so. Also, because the QASS books run closer to 40k words than the roughly 90k of a "full" monograph, it felt like a good way to "ease in."

A couple of days after submitting my manuscript for peer review, I bumped into Matt Salganik at the ASA annual meeting, and he suggested I consider posting it for Open Review while it was undergoing the traditional review process. The basic idea of Open Review is to make a version of the manuscript available to be read (and hopefully commented upon) by anyone who's interested and willing before the text is finalized. Matt framed the utility of Open Review as potentially bolstering sales once the book is actually published, and as likely making for a more readable manuscript. It was the latter that intrigued me, but it's probably his claim about the former that led the publisher to give it a whirl.

After a conversation with Helen Salmon at ASA and some follow-up emails with others at SAGE about the idea, they gave me the go-ahead to try this with my manuscript. Matt had convinced me the conversion should be relatively straightforward, so I started trying to figure things out.

I had written my manuscript using the LaTeX template SAGE provides for the QASS series. It was my first real attempt at using LaTeX, so it's not my "native language" or anything, but given some background with HTML and the like, I found the transition relatively navigable. So at submission I had a LaTeX manuscript. The question was how much work it would take to convert into something usable for Open Review.

Matt and some colleagues developed the Open Review Toolkit, which converts a Markdown manuscript into a set of webpages and incorporates hypothes.is annotation into the resulting site to host the Open Review process.

In theory, pandoc should allow for relatively smooth conversion between LaTeX and Markdown. In my experience, that wasn't the case: basically all tables, some figures, and most cross-referenced information (e.g., citations) didn't port accurately in the attempts I made. I asked the developers Matt had worked with what it would take for them to help with the conversion. They quoted a price that I'm sure was reasonable, but since I don't really have a budget to support this project, even that was more than I was ready to spend out of pocket. So I thought I'd see if I could make it happen myself with the available documentation and a few support emails. My threshold was that I'd give it three days, no more.
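For anyone attempting the same route, the kind of call involved looks roughly like the sketch below. I'm writing it as R, via the rmarkdown package's wrapper around pandoc, since that's where the rest of my workflow ended up; the file names are hypothetical, and, as I said, expect the tables, figures, and citations to need hand-fixing afterward.

```r
# A minimal sketch of the LaTeX -> Markdown step, calling pandoc from R
# through rmarkdown::pandoc_convert(). File names here are hypothetical.
library(rmarkdown)

pandoc_convert(
  input  = "manuscript.tex",   # the LaTeX source submitted for review
  from   = "latex",
  to     = "markdown",
  output = "manuscript.md"     # tables, figures, and citations will likely need hand-fixing
)
```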

I ended up converting the whole manuscript to R Markdown, in large part because I'm aiming to start using it elsewhere (mainly for teaching), so I figured this would be a good opportunity to learn (it was also new to me). Along the way I stumbled across the bookdown package, which helped with some of that. For the most part, it worked pretty well. But I still needed to rebuild the tables and reformat the figure captions. Perhaps most problematically, some of the cross-referenced citations worked seamlessly while others didn't. This is probably the result of my being a newbie to LaTeX and having used quite a few different citation commands that may not have been compatible.
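If it helps anyone weighing the same move, the bookdown side of things is small. This is just a sketch with hypothetical chapter file names, not SAGE's or the toolkit's required setup:

```r
# A minimal sketch of building an R Markdown book with bookdown.
# Chapter file names and the output format are hypothetical examples.
library(bookdown)

# Chapters live in separate .Rmd files (index.Rmd, 01-intro.Rmd, ...);
# bookdown stitches them together in file-name order.
render_book(
  input         = "index.Rmd",
  output_format = "bookdown::gitbook"   # an HTML book; bookdown::pdf_book() also works
)
```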

So, that meant I had to work through those elements manually to fix them for the Markdown version. For tables and figures, that was relatively straightforward. But for the citations and cross-references, I basically had to read my way through the whole text to find the errors, since I wasn't sure exactly which ones had and had not converted. You'll probably still find quite a few places where I didn't catch the needed fixes (sentences referencing someone else's work that are missing their subject because the citation didn't render).

Getting from R Markdown to Markdown was a relatively simple conversion within RStudio. So now I had a Markdown version of my manuscript. From there, it plugged quite directly into the way the ORT is designed, as described here; that was the real instruction manual I used for building the site I ended up with. The only real changes we made at that point were a few adaptations of the versions posted to the ORT GitHub site that I thought were useful (Luke Baker added footnote support, since I tend to speak and think with parentheticals embedded in basically everything I say, and adapted the way figure callbacks are available online). All in all, once I had a Markdown book, the process was pretty workable. And I wrapped it all up well within my three-day limit, even with having to do a fair amount of the table and figure conversion into Markdown by hand.
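For what it's worth, that conversion step can be as small as a one-liner. This is a sketch with a hypothetical file name rather than the exact command I ran:

```r
# A minimal sketch of the R Markdown -> Markdown step. knitr::knit() runs the
# code chunks and writes out a .md file that the Open Review Toolkit can take
# as input. The file name is hypothetical.
library(knitr)

knit("chapter-01.Rmd", output = "chapter-01.md")

# An alternative that keeps pandoc-flavored Markdown:
# rmarkdown::render("chapter-01.Rmd", output_format = rmarkdown::md_document(variant = "markdown"))
```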

Was the process worthwhile? Honestly, we'll have to wait and see. So far, people seem to be reading the manuscript, but the amount of feedback I've received is thinner than I'd hoped for. Maybe this post will help. Maybe I should add a "deadline" to motivate people not to assume they can get back to providing feedback later (I know that was something that limited how much of Matt's book I made it through online; but, from an N of 1, if SAGE is reading this, the Open Review version is what sparked me to buy his book almost immediately upon its release).

So, it’s up. I hope people find it useful, and welcome any and all feedback you might have to make for a better book.

jimi adams is Associate Professor of Health & Behavioral Sciences at the University of Colorado Denver.

Machine translation, internationalization, and the Open Review Toolkit


The Open Review Toolkit is designed to help create better books, higher sales, and increased access to knowledge. All of these goals—especially increased access to knowledge—could be advanced if all books could be published in all languages simultaneously. Unfortunately, that's not possible yet. But machine translation can help us move in the right direction, and so the Open Review Toolkit has excellent support for hosting the same manuscript in many languages. In this blog post, I'll describe our experience machine translating my new book Bit by Bit: Social Research in the Digital Age into more than 100 languages. The process took just a few days and cost about 30 US dollars per language.
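To give a sense of what a per-chapter translation step can look like, here is a rough sketch in R. To be clear, the googleLanguageR package, the credentials file, the chapter file names, and the target language below are assumptions for illustration, not necessarily the exact pipeline we used:

```r
# A rough sketch of machine translating one Markdown chapter with the Google
# Cloud Translation API via the googleLanguageR package. The package choice,
# credentials file, file names, and target language are assumptions for
# illustration, not necessarily the pipeline we actually used.
library(googleLanguageR)

gl_auth("service-account-key.json")        # Google Cloud service-account credentials

lines    <- readLines("chapter-01.md")     # one line of Markdown per element
has_text <- nzchar(trimws(lines))          # skip blank lines to save API calls

translated <- lines
translated[has_text] <- gl_translate(lines[has_text], target = "es")$translatedText

writeLines(translated, "chapter-01-es.md")

# In practice you would batch requests, respect API rate limits, and protect
# code blocks and URLs from being translated.
```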

Continue reading

Open Review Toolkit: Versioning

After my book, Bit by Bit: Social Research in the Digital Age, went through Open Review, I was excited that the First edition was finally ready to be printed and posted online. Then I ran into a problem.  How should I post the First edition online while preserving the Open Review edition and all of the accompanying annotations?  In this post, I’ll explain the problem in detail and describe what we ended up doing.

If you want to skip all the details, here’s what I recommend for other authors using the Open Review Toolkit:

If you are interested in the gory details, keep reading. . . .

Continue reading