“When you can count, count.”

— Skip Fischer

When I was training to be a historian, I suppose starting with MA work, I managed to internalize the notion that “real” historians crunched data, drew conclusions from it, and presented it to their audience in the form of tables and graphs. One thing that most of the “real” history books I was assigned to read had in common was lots of tables and graphs. It would be reasonable to assume, then, based on what I just wrote, that I began to pay special attention to these tables and graphs, with a growing ambition to create my own as I developed into a “real” historian myself.

Reasonable as such an assumption might be, it would also be wrong. I disliked tables and graphs. I never read them. My attitude was—and to some extent, still is—that the author had better tell me what conclusions are to be taken away from the tables and graphs without my actually having to peruse them. When I read a book, I want to read words strung together in well-crafted English. I do not want to look at numbers and make my brain do math.

The next reasonable assumption might be, then, that I had a problem: I was training to be a “real” historian, I had internalized that “real” historians do research that results in tables and graphs, and yet I disliked tables and graphs, and thus was unlikely to want to do research that would lead to my producing them. This assumption would be correct. As superficial as it may come across, I think that helps explain why thirteen years passed between finishing my MA and beginning my PhD.

Doctoral training is, of course, literally on a whole other level, so I learned that “real” historians do all sorts of things, and by no means must one do the sort of research that leads to the production of tables and graphs. Nevertheless, as my study progressed, I realized that where it was headed was a shelf of books full of—yep—tables and graphs, so much so that I often found myself repeating in my head, “I am not an economic historian, I am not trying to become an economic historian.” That was important, as I have had precisely one economics course in my academic career, as a college sophomore, and my performance in it was less than remarkable.

I found my way, and here I am, five years out, in the late stages of research for this second book. I’m still not an economic historian and have no ambition of becoming one. Yet, as you can see from the image that accompanies this post, I am sitting here producing—tables and graphs. What’s weirder than that is how much I am enjoying it.

I think this is best explained by an adage that has proved to be among the more generally applicable to life, so far as I can tell. It can be expressed something like this: It’s striking how one can suddenly develop a keen interest in learning and using a skill for which, in the past, one had no such interest, when that skill proves necessary to accomplish something about which one actually gives a damn. I think this also gets at a perennial problem of pedagogy: trying to teach students skills and knowledge you know they need, but in which they have absolutely no interest, because they are not yet aware of when and how and why they will ever need those skills and that knowledge. If you gave a damn about learning your multiplication tables in fourth grade, good for you. I can assure you I did not.

Nevertheless, I’ve been getting up and going about my morning looking forward to sitting down and making columns of entries with numbers in a spreadsheet, computing percentages and making a series of preliminary tables that, on the best days, lead to one concise final table that answers the question that started the whole process. And then, I can open the Insert tab in Excel, hit the Recommended Charts button, and voilà! A beautiful colored graph appears next to the table, just like that. If I re-sort the table, it re-draws the graph accordingly. This is delightful! Whodathunkit?

So, as it turns out, tables and graphs are tools—means to an end—and I am learning to appreciate those tools and how to use them because I need what those tools can give me. I’m just compiling numbers and percentages and estimates; I’ve never had a statistics course in my life (“I am not an economic historian…”), but I need that stuff for this book. Those numbers have interesting stories to tell, and I’ve set myself the task of telling those stories.

As a bonus, I get to feel like a “real” historian in a way I haven’t yet, despite my accomplishments. I readily grant that this is a completely idiosyncratic sort of satisfaction; I’ve tried to explain where it came from. Other historians, I’m sure, feel like “real” historians when they first write a piece they feel really good about. We all come to this with different skills, interests, limitations, and prejudices.

I will end by thanking my fourth-grade math teacher, then. He also had a sign on the wall of his classroom that I hated at the time and love now: “If you don’t have time to do it right, when will you have time to do it over?” But that is a topic for a future rumination. If you have just read all this, thank you.

Second Sultana research trip

On Thursday evening, we arrived safely back home from the second research trip to Chestertown, Maryland, to examine the remainder of the log books and the muster books. If you don’t know what this is about, here is a brief run-down; see earlier posts for more. I’m working on a “microhistory” based on the four-year cruise of HM Schooner Sultana, built at Boston in 1767 and taken into the Royal Navy the following year at Deptford naval yard just outside London. The detailed records kept by the Navy allowed for an accurate reconstruction of the little vessel (she is, we think, the smallest vessel ever commissioned in the Royal Navy), and the new Sultana was launched in 2001. She is owned and operated by the Sultana Education Foundation of Chestertown, Maryland, which owns complete copies of all the master’s and commander’s logs and the muster books (lists of crew and their statuses) for the entire cruise, July 1768 to December 1772.

On the first trip, in September 2019, I completed the master’s logs, got about 10% of the commander’s logs, and looked at the muster books enough to know what they contained and how to approach examining them. From the master’s logs, I spent almost three months of full-time work extracting and analyzing sailing data, as well as instances of interception; Sultana’s role was as a customs interceptor, helping to enforce customs duties and interdict smugglers at British American ports on the Eastern Seaboard, from Halifax to Cape Fear (where I live).

On this trip, I finished it all. (Well, almost; I took high-res images of what I didn’t have time to go through, and I’ll do that here at home.) Working was a little bit different this time; to keep me out of the main building for health reasons, SEF set me up in the rigging shop, at a long work table. I worked through the documents where several of Sultana’s tackles were hanging, their blocks having been freshly varnished. Below, on the main shop floor, some of her spars rested on saw horses, awaiting fresh coats of paint. Her topsails and yards, wrapped in plastic, hung high up from the walls for the winter.

The President of SEF also made copies of his copies of two student papers on provisioning and clothing, as well as a complete copy of his copy of the sailing directions for various American ports, written in commander Lt. John Inglis’ own hand. (Fortunately, his hand is unusually neat.)

I want to thank Drew McMullen, the aforesaid President, for this and all his help, without which this project would never have started. I want to thank Aaron Thal, Sultana’s captain, for the interview he sat down for via Skype before we went up, and for his logistical assistance in getting the heat set right and lending me some keys, and generally making me feel welcome. Most of all, I want to thank my patrons, whose donations made this trip possible, and the National Coalition of Independent Scholars, for the Research Grant.

I’ll be working through this stuff over the next two to three months, and I’ll post updates here. I hope to have an edited draft of the book this summer. Stay tuned, and thanks for reading.

Quantifying inference: The source challenges of the master’s logs

Exactly three months ago, I started transcribing the sections of Sultana’s master’s logs recording her ocean passages—156 days total, from August 1768 to early December 1772—for the purpose of extracting all the data and information on her sailing performance that I could possibly wring out of them. Ultimately, this project generated 38 separate summary documents and spreadsheets, the records of processing this information through multiple stages of analysis. I also plotted the wind directions and Courses Made Good for each of the 156 days on a 360-degree compass card pasted into 156 individual Word documents. Tomorrow, I am going to read a book. (“Required reading” for the project, but still, it’ll be nice to look at a different set of black characters on a white background for a few days.)

Working with this data brought up the basic challenges of trying to extract information from most historical sources. The first is that the source was not created for you, or with your agenda in mind. Whoever created it did so for their own reasons, and it’s highly unlikely it ever occurred to them that a historian three hundred years later would have the slightest interest in it. David Bruce’s log was written as standard procedure for the master of a vessel in the eighteenth-century British Atlantic, not for me. That set me up for the next challenge. My ultimate hope was to learn what sails Sultana was flying on what points of sail (angles to the wind), and in what wind strengths. This is the core of sailing performance, especially when the day’s progress is recorded, which in a log, it is. Unfortunately, I had to approach this task obliquely, as the log tells me only some of what I want to know.

Generally, the log records three to four wind directions for the 24-hour day (noon to noon); North Atlantic weather is more fickle than steady. It also records between one and five wind strengths, along with other weather conditions—such as “moderate” or “strong gales and squally.” The third element of the equation is a Course Made Good (CMG) for the entire day’s run. Problem Number One is that the correlations between wind direction and wind strength are rough rather than exact. Much of the correlation must be inferred, by considering all the information presented. Problem Number Two is that only the day’s CMG, not the individual courses steered, is presented. So, if the wind came from three different directions that day, and it’s clear from the Remarks column that the crew made several sail changes indicating more than one point of sail, then we can assume that more than one course was steered. In order to know precisely the point of sail Sultana was on at any given time, we need to know what course she was steering when the wind was coming from a specific direction. The log rarely gives us that. So, plotting her points of sail meant plotting wind directions and CMG for the day, and inferring the points of sail from the angles of those wind directions to the CMG. Those inferences are, by necessity, approximations.
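For the curious, the core of that inference fits in a few lines of code. Here is a rough sketch in Python; this is my own illustration, not part of the project’s actual tooling, and the wind bearing, the CMG, and the boundaries between points of sail are all assumptions for the sake of the example.

```python
# A rough sketch of the inference: the angle between a recorded wind
# direction (the bearing the wind blows FROM) and the day's Course
# Made Good suggests a point of sail. The category boundaries below
# are illustrative assumptions, not anything found in the log.

def angle_off_wind(wind_deg, cmg_deg):
    """Smallest angle between the wind's bearing and the course, 0-180."""
    diff = abs(wind_deg - cmg_deg) % 360
    return 360 - diff if diff > 180 else diff

def point_of_sail(wind_deg, cmg_deg):
    a = angle_off_wind(wind_deg, cmg_deg)
    if a < 45:
        return "too close to the wind"
    if a < 70:
        return "close-hauled"
    if a < 105:
        return "beam reach"
    if a < 150:
        return "broad reach"
    return "running"

# One day's entry: wind from NNE (22.5 degrees), CMG of SE (135 degrees)
print(point_of_sail(22.5, 135.0))  # broad reach
```

The real work, of course, is everything this toy leaves out: weighing several wind directions against one CMG, reading the Remarks column for sail changes, and accepting that the answer is an approximation.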

Problem Number Three is that Bruce is clearly inconsistent in recording sail changes. This is perfectly understandable. He was sailing master of a small vessel with 25 crew aboard, trying to stay alive and afloat in the unforgiving North Atlantic in all the wrong months of the year. He had much to do. He records that he reefs sails that he has not recorded ever setting. He lets one or more days pass without recording any sail sets or changes, yet then makes a remark that makes it difficult to assume that his sail plan was simply carried on from the last entry. Again, inferences must be made, and the unavoidable element of guesswork must be accepted.

So, in the end, my tables of percentages given to one decimal place would be most misleading if I did not qualify them with disclaimers like these; they imply a degree of interpretive precision that is simply beyond our grasp at this point. We can say the same about so many such quantitative exercises in history—perhaps most of them. We have to take care not to ask more of our analyses than they can provide, and we must make clear to our readers what we have done, what assumptions we have made, and why.

Having said that, I am excited about what my analyses ultimately suggest. I am fortunate in that I should have the opportunity to discuss the results with the captain of the current Sultana; insights from his experience should clarify much of the uncertainty that remains, though even then, there are sure to be judgment calls of Bruce’s whose reasoning will elude us. We cannot, in the end, get inside another person’s mind. We can only know what they thought to tell us. Even then, that’s only what they wanted to record, or their own understanding of why they did what they did, and there’s likely even more to it than that. But that is a discussion for another post.

Letting the evidence lead


I’m deep into transcribing parts of a ship master’s log. There’s the early, painful period of learning the writer’s handwriting (which in this case is particularly bad), and with any eighteenth-century script, learning the orthography, frequently by comparing one weird character to a similar one somewhere else in the text, thinking about context, until finally it clicks and you say “oh, it’s a Q!” I’m past that; it’s uncommon now for him to stump me. And, since I’m accustomed to the pattern of the content, and of his phrasing, I can transcribe pretty fast, despite the complete lack of punctuation and proper capitalization.

So, with the actual work now just being a matter of getting through it, and being careful, my brain has time to think about what to do with it. I’ve found myself thinking about how to use it, how to relate it to the secondary reading, what other primary-source material I may need, and how the book is going to take shape. Is it going to prove worth doing, or is the original purpose going to elude me, ultimately? Who’s going to publish it? How do I write it so that they will?

After I knocked off work yesterday, and had a little Brain Adjustment Juice (rum), I told myself to back off of all that. I reminded myself that we work like scientists; we have to let the evidence lead us where it will. Let the sources speak whatever they have to tell. We can’t do that if our own noise is getting in the way in our heads. It takes courage; we’re turning over control of the process to people who died over two hundred years ago, the only remains of whom are the scratchings on these pages, and what those scratchings represent and convey to us from across all that time and disparity of experience. If we’re going to hear all that, we have to be quiet and listen carefully. We can’t analyze the data until we have the data. We don’t impose our preconceived agendas on the evidence. The evidence may well demand a revision of any such agendas. That’s OK; that’s how we write good history.

The Importance of Tedium

I’m taking a short break from “real” work to write this little blog post, mostly because composing prose makes me happy and compiling spreadsheets is just work—work like weeding a flower bed, or rolling coins—monotonous, tedious, dry, dull. But also meticulous—every little detail—which makes it worse, compared to, say, pounding nails into a fence or scrubbing a deck. Mustering the discipline to keep at it, hour after hour and day after day, is also work. But this is the work that makes the payoff of doing history (or archaeology) possible. When you read the book or watch the documentary or tour the exhibit, you don’t see the mountain of tedious work that lies beneath that final product.


This is not a moan-and-groan. What I’m doing right now is going through the master’s log of HM Schooner Sultana—or, rather, my notes on it, as I went through the log itself over ten tedious work days last September in Maryland—and compiling a spreadsheet of every recorded encounter she had with another vessel in her four years on station in British America, from fall of 1768 to fall of 1772. I’m in the summer of ’71 right now and I’m up to Line 243. Date, location, vessel name, vessel type, from, to, cargo, action, outcome. Again. Again. And again.

But such tedious compilations are the gold mines of history. From this spreadsheet, I can think of so many interesting extrapolations already. What percentage of vessels stopped were schooners like her? What percentage of those were on coastal routes? Island routes? There’s a whole list in my head—and soon, there will be a list on paper. The fun is in manipulating the data to answer questions—but first, you have to compile the data. And that may be tedious, but it also requires understanding the source material. I could hire someone off the street to enter words in a spreadsheet, but I can’t hire someone off the street to understand what they’re reading so they will know what words to put in the spreadsheet. I’m trained to do that, and I’m the one who read the original source document. So it’s up to me, and knowing what I’m going to be able to get out of this keeps me going with it.
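To give a flavor of the kind of extrapolation I mean, here’s a toy sketch in Python (not my actual workflow, which lives in Excel). The first row is the real Diana entry given at the end of this post; the other two rows are invented stand-ins, not actual entries from Sultana’s log.

```python
# A toy version of the "what percentage were schooners?" question.
# The columns mirror the spreadsheet described above; only the Diana
# row reflects a real encounter, and the rest is made up to illustrate.
import csv
import io

rows = """date,location,name,type,from,to,cargo,action,outcome
1771-06-15,near Sandy Hook,Diana,brig,Liverpool,New York,deal goods,fired 3 guns,boarded
1771-06-20,off New York,Betsy,schooner,Boston,New York,fish,fired 1 gun,boarded
1771-07-02,Delaware Bay,Polly,schooner,Philadelphia,Antigua,flour,hailed,released
"""

encounters = list(csv.DictReader(io.StringIO(rows)))
schooners = [r for r in encounters if r["type"] == "schooner"]
pct = 100 * len(schooners) / len(encounters)
print(f"{pct:.1f}% of stopped vessels were schooners")  # 66.7%
```

Multiply that by 243 lines and a head full of questions, and you can see both the tedium and the payoff.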

There will be more spreadsheets to do. And more source documents to spend more tedious days reading. But, in the end, it will all result in a great book that will bring this all back to life. My parents grew up knowing that if you wanted a good cotton crop, you’d be spending day after hot exhausting day chopping weeds with a hoe in the fields. This is a far cry from chopping cotton, but the analogy holds, at least so far as it can, given that we’re talking about work you do sitting on your butt in an air-conditioned room. Speaking of which, it’s good to get up frequently, blink your eyes, walk around, do something else for a few minutes. And then get back to it. The only way to the end starts with Line 244.

15 June 1771, near Sandy Hook, Diana, brig, Liverpool, New York, deal goods, F[ired 3 guns]

Continuity and change in American racism

On a late spring afternoon, my mom was standing in line at the Big Star getting groceries before heading home to make dinner for her husband and tend to her baby. In the checkout line, a voice came over the loudspeakers and announced that MLK Jr had just been shot and killed downtown. The checkout boy smiled and said “They finally got the son of a bitch.”

That was April 4, 1968, in Memphis. I was the baby. To just about any of you reading this–including my mom–this will seem like a long time ago. Two likely reactions to this vignette, in light of when it happened, are: 1) That was a long time ago and things have changed a lot since then, and all this yelling about racism is seriously overblown; and 2) That was a long time ago and apparently nothing has changed, despite the civil rights movement and all the laws and trying to re-educate people and all that. So we’re screwed.

Both reactions would be understandable, and fortunately, they are both wrong.

As a historian, it’s my job to pursue abiding questions in human history whose examination takes place over the long term, regardless of current events. But it is also my job as a historian to offer whatever insights I think history might have to offer as we grapple with pressing problems that may seem of the moment, but are actually deeply-rooted in our history.

The fact is that things HAVE changed a lot since I was born. As humans, it is natural enough that we would prefer to focus on that, because it feels good. But there is another fact, on which we would rather not focus, because it makes us frustrated, angry, and depressed. That fact is, of course, that things haven’t changed as much as they needed to, or that we’d like to believe they have.

It is true that changing deeply-embedded social attitudes takes time, and that fifty-odd years is not that much time in societal terms. But we mustn’t use an acknowledgement of that reality as a cover or excuse for the fact that this society has been largely stalled on that front for much of that period. Until we acknowledge that, and understand it, we have what seems like a valid reason to do nothing and just wait for “time” to take care of it. It won’t. It will just get worse.

That’s because racism is still well-ensconced in the bedrock of American social consciousness. That’s obvious right now, in that the most vocal, overt racists have been emboldened by the politicians they helped put in office to act out in ways no one my age or younger has ever seen. (My parents, on the other hand, saw plenty of it–and much worse.) While of course it causes most of us great dismay, it’s encouraging in a way that this is happening, because it forces the rest of us to pay attention to the fact that something we would like to think has gone away is still very much with us. On the other hand, it would be easy to comfort ourselves by assuming that these are the ONLY racists, and that if we can just isolate and neutralize them, we can isolate and neutralize racism–and, after all, they are a desperate fringe on the political ropes, right?

If only. But racism in this country–and elsewhere in the world–is much bigger and deeper than that. We can root it out, but only if we acknowledge its existence, call it for what it is, and then start doing something about it. We haven’t done that yet. That’s why things haven’t gotten any better than they have. I hope some of you find that encouraging–especially those of you much younger than I. You can do this–if you want to. The problem is, this country has never really wanted to. And that’s largely because racism, as I said, is much bigger and deeper than the shouters and the shooters. The history will help explain that.

A big question students of U.S. history ask is, Why doesn’t the U.S. seem to have a class system, the way our mother country and other Old World societies do? There are two answers to that. The first is, we do, but we don’t like to acknowledge it, because it interferes with our central national cultural myth of equality and opportunity. The second, though, validates the question, because it is true that our class system is different. Part of the difference is structural and part of it is psychological, and one reinforces the other. The difference begins with the absence of a hereditary aristocracy in this society. It was never transplanted, because hereditary aristocrats have no need to go hack and sweat their way through a new world; they’re already masters of the one they’re in. With no firmly-ensconced upper class, there was room for the creation of a new one based on something other than birth. While early America certainly inherited the cultural assumptions of “good breeding,” and they meant something far more substantive by the term “gentleman” than most contemporary Americans could ever understand, the class structure was more fluid here.

The second major reason for the distinctiveness of our class system from that of the Old World is racism. The entire British American enterprise, once it began to take off in the 1640s, was based on the wealth produced by captured, transported, enslaved Africans–over 12 million of them. The more we study the early modern British Atlantic world, the clearer that becomes. That was, of course, unmistakable in Barbados or South Carolina, but it was just as true of the commercial ports of Liverpool, Boston, and New York–just one or two steps removed. Racism as we know it–the assumption that one ethnic group is inherently superior to another, giving it the right to rule and exploit–or at least to denigrate and marginalize–the other group–coalesced, hardened, and wrote itself into law as the slave economy grew.

The Civil War was fought against slavery, not against racism. The national effort to ensure a new society in which black Americans had an equal opportunity to participate lasted 12 whole years after one of the worst wars in modern history, before the country lost interest and let the Southern white supremacists take over again. Why? Racism.

The Civil Rights Movement of the 1950s and 1960s WAS a fight against racism. It enjoyed remarkable and unusual success in changing the legal structures that had institutionalized racism, particularly in the openly-apartheid South.

But racism was never, and is not now, a Southern phenomenon. When MLK’s marchers demonstrated against housing discrimination in suburban Detroit, they were attacked and beaten by mobs of local whites. The ghettos of Northern cities weren’t there just because people like to live around their own people. They were there because those people weren’t allowed to live anywhere else.

In 1970, the Federal judge for whom my dad worked wrote the first court-ordered busing plan to desegregate public schools in compliance with the law. It was called Plan Z. The death threats were bad enough that he was under the protection of the U.S. Marshals for a time. But by then, Nixon and the Republican Party had already successfully deployed their new “Southern strategy,” exploiting the resentment and fear of Southern whites who were watching the riots in the North and California on TV, and who were happy to support calls for “law and order” and appeals to “the silent majority.” Ironically, George Wallace, running as an openly segregationist independent, had taken enough votes from Nixon in 1968 that Nixon’s victory was narrow. It wouldn’t be in ’72. I used to think that a neo-Wallace candidacy would be impossible now. That was before 2016. A lot of us were happily complacent about a lot of things before 2016.

And this past week, a subcontractor for my wife’s parents was arrested after brandishing an automatic rifle at a group of Black Lives Matter protesters. That brings us up to 2020.

Racism is tightly bound up with poverty. That’s one of the main reasons it’s so hard for us to get our heads around it–those of us who are trying to do that, anyway. The two feed each other. When you set up a society in which one ethnic group is systematically, automatically disadvantaged–which our ancestors most certainly did, whether in Philadelphia, Pennsylvania or Philadelphia, Mississippi–that group is going to be disproportionately poor. That poverty will persist, generation after generation, artificially amplified and structured by an entire complex system of discrimination. Both the discrimination and the poverty will wreak havoc on the souls of those in the affected group, as they always do with human beings. Crime, self-destruction, hopelessness, an inability to take control over one’s own life and impose direction and purpose on it–these are the classic individual and social pathologies of poverty and disadvantage.

The white poor share much of this, of course. There are more white people on welfare in this country than black people, simply because there are more poor white people in this country than poor black people. But black people are affected out of proportion to their numbers, because the other element–the racist discrimination element–affects them uniquely.

That brings us to something MLK Jr and others figured out early on: there is no fighting racism without tackling poverty–black and white. And brown–historically, there was no place in the U.S. where you would find a more toxic concentration of the pathologies inflicted on a group of people by poverty and marginalization than on an American Indian reservation.

Lower-class American whites have always been able to direct their resentment toward black people rather than at the classes above them. “Race” trumps class in this country. That is the final piece of the answer to the question about the American class system. Those with more wealth and power–white of course–have always been able to exploit this lower-class white resentment whenever it suited their interests. But that paints poorer whites as passive dupes, and a close examination pokes holes in that assumption. “Why don’t poor white Americans vote their class interest?” ask perpetually-befuddled observers from outside our society. The answer is, because they are voting their RACE interest, which carries very real privilege. They believe this privilege gives them an advantage in life, and that surrendering it–allowing the non-privileged to have equal access–is acquiescing in their own subordination.

This is the fallacy of the zero-sum game–the fallacy that Adam Smith famously exposed in 1776 when he touted the advantages of free-market capitalism (as we call it) over imperial mercantilism (closed-trade systems). The fallacy of the zero-sum game is that wealth–or power–is of finite quantity, such that if I take a bigger slice of the pie, it leaves everyone else a smaller one. In the zero-sum game worldview, it’s us against them. Take yours before someone else gets it. But the best thing by far that free-market capitalism taught us is that wealth can be created. We can create opportunity.

It is true that, today, we are being forced to face the very worst thing about free-market capitalism: that it assumed unlimited growth based on unlimited resources, and resources, it turns out, ARE limited, so we must learn not to be wasteful and to turn our one-way throw-away economies into more circular ones. That has people scared, and it fuels that defensive instinct to hold on to what’s yours and to see the disadvantaged as a direct threat–especially if you are already on the bottom rung of the ladder.

But there is more to it, and the rest of it is not as easy to read, or as easy to excuse. The rest of this will not shy away from some ugly truths. Historians can’t afford to do that. What follows may offend you. I certainly hope so.

We humans have an innate desire to feel superior to each other. It takes an exceptionally healthy and happy soul to be free of that. Primates are socially hierarchical. When you feel close to the bottom of your society, you are likely to cling fiercely to your sense of superiority over anyone below you. As Gene Hackman’s character relates to his FBI partner in the movie Mississippi Burning, his poor-white father once said to him, “Son, if you’re not better than a nigger, who ARE you better than?” It is certainly true that saying such a thing out loud is not as accepted as it used to be. That is because things HAVE changed. But it is equally true that a lot of Americans still feel that way, whether they admit it to themselves or anyone else or not. Things haven’t changed as much as they need to.

Rich people, of course, can afford to distance themselves from such impoliteness, and always have. They don’t have to speak the words and they don’t even have to feel them. They just reap the profits from the entire system. Their sense of superiority is on much firmer ground.

But even MLK didn’t ask us to stop judging each other. He just asked us to stop judging each other based on ethnicity. That was in 1963. We name roads after him and have a holiday for him, but that is self-congratulatory. He was far more radical than we’d like to acknowledge. He understood the connections between racism, systemic poverty, and the distractions of imperial wars, in which the shock troops are poor kids in uniforms.

Here’s what racism really truly looks like in everyday, “respectable” life in our society. I am going to put things into words here that are not my words. Please do not lift them out of context or quote them so that they might be mistaken for my own words and attitudes.

It’s not just the overweight working-class white thirty-something brandishing the automatic rifle at the protesters. It’s your nice, unassuming retired aunt who reluctantly admits, after a few drinks and more than a little pressing, to being racist. She would never brandish a rifle in public. She deplores that. But she votes.

It’s the retired couple from up North who, in the course of calmly justifying their choice in 2016, sincerely offer the observation that, in their experience, black people are the only group of people who refuse to work–who just want everything handed to them. 

It’s the guy on the corner yelling at the BLM protesters because the police have good reason to be jumpy with black men, since most of the criminals are black men, and once in a while, a mistake will be made, and there’s no such thing as white privilege when white people have to work for everything and black people just get special treatment. And it’s too bad about the kid in the Chicago projects but the sad truth is, he would have just grown up to be a drug dealer anyway. That guy votes.

And it’s the rich businessman in the two-million-dollar house on the river with his gigantic homemade sign who would never admit to being a racist. He votes. And donates.

Here’s the crux of today’s American racism: The problem with black people is black people, simple as that, and while the rednecks with their Confederate flags are obnoxious and distasteful, they aren’t wrong. Americans who subscribe to this will be careful about when, where, and to whom they say this out loud–if they say it out loud at all–but it’s there. And it is either the tacit or explicit opinion of the majority of people who have decided many of our most recent elections.

And that will only change when those of us who do not share this opinion out-vote them. The numbers are there. But not enough show up. And then they get gerrymandered…but we have made real progress pushing back on that. Now is the time to take advantage of it. Racists vote.

But in everyday life–when it’s not election day, and when there’s no protest–we just have to start calling out racism for what it is. In my experience, that has not led to huge unpleasant scenes. Then again, I’m not putting myself in social situations with the hard-boiled and then challenging them. That’s not for me. If you can do it, you have my respect.

Calling it out for what it is will be easier when we know it’s out there and we’re prepared to hear it from “nice” people. Then we’re not so taken aback, kicking ourselves later for not saying anything. It IS out there. It’s everywhere. And as long as it can hide, it’s safe.

One thoughtful push-back to a racist comment, and one vote, may not seem like much. But that’s everything. That’s the solution. If you’re doing it, you’re not the only one. And network-theory researchers have found that one such action can have quantifiable ripple effects that you, as the instigator, will never know about. So don’t take it on faith; there’s evidence.

That checkout boy in 1968 might or might not feel emboldened to say the same thing today. The fact that we don’t know means we have lots of work left to do. But it’s work we can all do, in our regular lives, while we go about our regular business. I don’t remember if my mom said anything to the checkout boy. What I remember is that she was terrified, as they announced the city-wide curfew effective immediately, in anticipation of rioting, and she rushed to get home to her baby. We all have our own lives and our own people to take care of. What I do know is, in 1968 in a white grocery store in Memphis, if she did say something reproachful to him, she would have been in a distinct minority. That is not true now. For all those who suffered so much to get us to that point, and for ourselves and our kids, let’s build on that.


Getting a First Book Published: A Walk-through of the Process


I’ve digested the publication process, from finished manuscript to release, for new scholars who haven’t gone through it yet, or for anyone who’s just idly curious. My experience with Brill was nearly perfect, so I wouldn’t expect the process to get much better than this.

I submitted my dissertation in February 2017. I knew I wanted to write a book based on it, but that the book would be more than a revision and expansion of it. I gave myself two years to do the research, write the book, and get a contract. By far the best circumstances under which to do that are under the relatively lucrative umbrella of a postdoctoral fellowship, which is analogous to a residency. For either one year or two (two if you can get it), you are paid enough to live on while you work on the book project. Most, but not quite all, are tied to a specific institution; you are required to be in residence there. Some have a light teaching load attached. Since we were not going to be relocating, those were out. I applied for the rest, and got none of them. They are, of course, ridiculously competitive. Humanities and social science funding in this country is not exactly a national priority. Fortunately, I have a spouse who is happy doing something that makes her a good living. My only direct financial assistance was a $400 grant from the Society for Nautical Research in the UK, which allowed me to hire a capable researcher to do some in-person work at the National Archives and the Bristol Archives–work that proved most valuable.

I had the manuscript close enough to finished to begin querying publishers in the fall of 2018. That requires writing a book proposal. I got help on how to do that from someone I knew who had been through the process, and also looked it up on more than one potential publisher’s website. I first submitted to ****** UP because they had just launched a series that I thought could be a good fit. They quickly informed me otherwise. Next, I went to the UP of X that publishes one of our primary journals and the monograph series that goes with it—both of which were co-founded by one of my mentors, who sat on their editorial board. Alas, he had died in the meantime. They rejected it too, which surprised me, but there is absolutely no point in dwelling on that for even five minutes (as is true for applications to schools and for grants).

At the same time (December 2018), I had submitted to Brill, when I realized that they had a series of monographs in the history of technology that seemed an obvious fit. Also, the series editors were, unbeknownst to me at the time, two scholars with whom I had worked to put together a conference panel that fall. One of those had actually presented a paper in the panel I had organized, and we had chatted quite a lot at the conference. So, they knew me and knew my work. Brill’s acquisitions editor responded while the UP of X still had the proposal and said he was interested. I put out a panic query to people who had been through this, who were kind enough to advise me to put Brill off nicely while I waited for UP of X. Most fortunately, Brill was still interested after UP of X passed, and so the proposal went from their acquisitions editor to the series editors—the ones I knew. Lesson: while simultaneous submissions are not forbidden in book publishing, as they are with journal articles, I don’t think I would do it again. Too potentially awkward and stressful. More Important Lesson: presenting at conferences can be worth far more than it might cost, and far more than you might realize at the time.

An aside is in order here. The acquisitions editor works for the publisher. He’s basically a buyer, though he can be as involved in the editorial process as he wants. In this case, given that there were two series editors and an editorial assistant, he was not involved at all past the acquisition, so far as I can tell. Series editors do not work for the publisher. They are scholars in the specialty who probably conceived the series themselves, and are primarily responsible for approving new titles and overseeing the editorial process once a title is acquired for the series. The acquisitions editor cannot accept a book for such a series without their consent. By the way, the only people here who are getting paid are the publisher’s employees—the acquisitions editor and the editorial assistant.

By 18 January, the series editors had approved the acquisition, and within a week or so, I had submitted the complete manuscript to Brill. The next step is to send it out for anonymous peer review. This entails the publisher asking (usually) two scholars in the field to read the manuscript and opine on whether or not the publisher should publish it, and if so, with what revisions. It can take time to find the reviewers, and it takes some time for them to review the manuscript. I was fortunate; Brill was prompt in doing this, and the reviewers were prompt in getting it back to them. Also, in this period, the series editors will make their editorial recommendations for revisions.

It was late July before I knew for certain that the book was going forward, based on the submission of the external reviews and the series editors’ reactions to those. With so many cooks in the kitchen at this stage, it may be necessary to do a little polite inquiring as to status, as communication might break down between one or more of the parties. This proved necessary for me. I had a back-and-forth with one of the series editors, who rode point through the process, and we got exactly on the same page about what revisions were going to happen. The series editors have full latitude to do what they want with the external reviews. Ours were mostly helpful, but the series editors did not want to follow every single suggestion. (External reviewers can be grumpy, even if they basically approve of the work.) While I was working on the revisions, I was also working on acquiring suitable images and permissions to use those images. This is a Royal Pain in the Ass and I Am Not Lying, but there’s no getting around it. Fortunately, the publisher had given me a handbook that treated in detail all aspects of getting the revision ready, including this one, and the editorial assistant was there to help if I needed her. She and the series editors had definite opinions about the illustrations and captions, and I welcomed that. None of their suggestions were objectionable to me. Keep in mind that the publisher will dictate what types of, and how many, illustrations you may use. Much of that has to do with cost. Brill was perfectly happy with lots of color plates, but then again, my book costs $153.00. Publishers who want to sell books more cheaply, let alone put them out in paperback, will generally not allow such extravagance.

Repositories will charge you big money for the rights to reproduce images from their collections–something I object to for more than one reason. The more widely they think the publication will be distributed, the more money they want. This can amount to hundreds or thousands of dollars. My main series editor knew from experience that German archives and museums don’t do this, so he advised using them. In the end, I found that the same was true of the Swedish Museum of History in Stockholm, which owns the originals of most of the technical drawings I wanted to use. This was a godsend. In the U.S. and UK, however, forget it. The editorial assistant decided she really wanted a certain image in there that’s owned by the National Maritime Museum at Greenwich. It cost £50, which Brill paid. The publisher might be willing to pay modest costs here; they will stipulate that in your agreement.

You have to have a permission form signed by the copyright holder of every image you want to use, and those forms have to be transmitted to the publisher, as they are potentially liable for any copyright infringements (although they stipulate in your contract that you are ultimately responsible for making sure you are not violating copyright).

Also, at this point, I filled out an Author Questionnaire for them, which compiles the information necessary for marketing. You select the key words for search engines, you write a short abstract, tell them who you think the readership will be, and you tell them about specific journals you think they should submit the book to for review, and specific awards for which the book might qualify.

I submitted the revision and all the permission forms late November 2019. So, ten months since initial acceptance. The series editors then have to read and comment on the revision. If they approve it, then at that point you will get a contract. The publisher will not formally commit to publishing the book until they have the requested revision in their hands and have approved it. I signed a contract on 19 November and they had received it by 6 December. (They’re in the Netherlands.) Finalizing images and permissions must be completed before the book can go into production. This required some back-and-forth up to Christmas 2019.

The contract spells out the respective obligations of author and publisher. Basically, the author commits to completing revisions and other work stipulated by the publisher in a certain time frame, and the publisher commits to publishing and marketing the work in a certain time frame, provided the author has satisfied their stipulations. It spells out who has copyright and what use may be made of copyrighted material by the author. These terms, in my case at least, are quite generous toward the author. The contract will also specify number of author copies, percentage of author discount on additional copies and on other books, and how royalty distribution works. (In general, one does not realize royalties on academic books.) I found the contract easy to read and, while my impression was that there might be some wiggle room for negotiation, the terms were fine with me as initially offered.

The book went into production 6 February 2020, and at this point I was working with a production editor. Her job was to see the book through the production process and, ultimately, send it to the printer. Meanwhile, she would be working with me, on the one hand, and a typesetter on the other. The typesetter isn’t really a typesetter anymore, but someone who takes the files and converts them into final form for publication. The files they are given are already damn close; you have used the font they asked for (their house font, which you download), you have formatted everything exactly as stipulated, and you have copy-edited the crap out of the manuscript (if you’re good, you’ll still miss a few things; I’m good, and I did).

Somewhere in here, the book will go up on the publisher’s website as a forthcoming title. This same page will be where buyers can order it when it is out. In our case, there was also a PDF flyer that could be distributed.

The series editors and the production editor commented on the submitted revision, and we made a few minor tweaks. Then she sent it to the typesetter for conversion into proofs—the final-layout form that I would then proofread, correct, and send back. Meanwhile, I had to compile the index, minus page numbers, as those would only be available once the proofs were done—and even then, only valid if the pagination didn’t change during the correction process. (The index took me two weeks of full-time work, in two separate stages. I’d never done one, so of course I looked up formatting rules in the Chicago manual. If you were using MLA or whatever, you’d look it up in theirs.) I submitted corrections to the first proofs on 13 March. All of the errors I found were mine, not theirs. Fortunately, none of them were major enough to mess up the pagination of the proofs, so I was able to paginate the index too. It’s important to note here that the publisher expects a clean copy of that revision to send to the typesetter the first time. They have to pay the typesetter and they do not want you coming back wanting major edits that mean the typesetter has lots of work to do to re-work the proofs. In fact, they reserve the right to charge you for it if you do. You are only allowed to correct typos. You cannot decide that you want to re-phrase your assessment of Smith’s book on spinning wheels.

By 31 March, they had sent me the final proofs to check, and I had let them know they were fine. By 10 April, the book had gone to the printer. The e-book was published on 14 April and the hardback on the 16th–two weeks earlier than the final publication date on the website. So, the entire publication process from acceptance to publication was about fifteen months. At this point, they will be selling it to academic libraries, primarily.

The process is, I’m sure, somewhat different for established scholars, let alone distinguished ones. I hope this is a helpful walk-through for other first-timers or aspiring academic authors.





A quick update…

So quick in fact that I’m going to bullet-list it:

  • It so happens that two books are coming out on the same day–30 April–the edited collection to which I have contributed, and my first monograph. The Publications page has all the details; I just updated that. (Link will open in a new window.)
  • I am getting toward the end of the stack of secondary-source reading for Book 2. I probably have another six weeks. Speaking of Book 2…
  • As I wrote on the Publications page, and sent out over social media, I was unable to secure funding for the second necessary archival work trip to Maryland this summer, despite applying for all the grants and fellowships I knew about. So, to keep the project on schedule, as I’ve already applied for a big 2021 grant for Book 3, I started a GoFundMe campaign to raise the necessary $2,100, or as much of it as I could. If you would like to know more about that, it’s on the Publications page, and there’s a link at the bottom of each page of the website. We’re about 2/3 of the way there. Contributions in any amount are appreciated and will be properly acknowledged. (Links will open in new windows.)
  • I hope everyone reading this is well and getting along all right. We’re fine here.

No ships, not much history, but hear me out…

(It’s a how-we-think-about-technology sort of piece.)

One of my chief goals as a historian of technology is to help myself and my readers understand the tacit assumptions we make about technology without realizing it–what we take for granted as truth when it is actually just our perspective, which, like any limited perspective, can do much to obscure, rather than reveal, the whole truth. That is a specific instance of perhaps the chief goal of history as an intellectual discipline: getting far enough beyond our own limited perspective to realize that it is exactly that–a limited perspective, based on limited experience–and that the perspectives and experiences of people in the past were different. (Anthropology does the same thing, but for people removed from us spatially rather than temporally–though anthropologists study those removed in time, too.)

The most common assumption we accept about technology is the assumption of “progress”–that overall, despite some bumps and hills and valleys, technology is improving. Of course there are many ways in which that’s true, at least partially, but the notion inevitably oversimplifies, at best, and greatly distorts, at worst, what’s really going on.

Perhaps the most helpful basic caveat we can apply to the general notion of “progress” is to acknowledge the reality of pros and cons–of advantages and costs. Every technological choice has a cost, whether or not it has benefits and regardless of what those benefits may be. When we remember to consider the costs of a technological choice, we ensure a fuller understanding of that technology, and our relationship to it. And by “cost” I do not simply mean a figure of currency. I mean the trade-offs that must be accepted when choosing one technological option over another.

I find today’s automotive technology, and the marketplace in which it’s bought and sold, an especially helpful example of how to think about technology accurately. At this point, the technology has been around long enough to have existed in more than one cultural milieu. It has been heralded and condemned, loved and loathed, credited for making “modern life” possible for most people and convicted as a glaring example of why “modern life” must change. I don’t have the time or space to write a short history of the automobile in cultural context, so I’ll stick to an assessment of it in my own society right now, and refer to its broader history in passing.

I write this at the moment when the internal-combustion-engine-powered automobile has peaked, and is now on its way out. If ever there were a perfected technology, we have experienced it in the cars of the early 21st century. A 2006 car, built toward the top of the quality scale, is as sound a rejoinder to “they don’t build ‘em like they used to” as we’ll find. Those of us with some years and miles on us remember the cars of our youth, and it’s only those who don’t know much about cars who think that the cars of any time before 2000 were “better” than those of our own day.

I might as well go ahead and use the word “better” so we can expose it for the intractable problem it poses to really understanding technological choice in human life. As noted, I think the example I just used is as strong a defense of the use of “better” as we’ll find. A 2006 car is likely to be safer, more reliable, more durable, more comfortable, faster, better-handling, more fuel-efficient, less polluting, and equipped with more convenience features than its closest equivalent from any point in the past, while the cost of that car, new, remained within reach of the middle-class buyer, and its durability means that, used, it presents a more attractive option to the buyer on a stricter budget.

What I just wrote is a fact, because I did not use the word “better” in an overall, general sense. I can defend, with specific data if necessary, any of the specific comparisons I just made. But observe what happens when I write this: “Any 2006 car is better than a ‘67 Pontiac GTO.”

[Image: 1967 Pontiac GTO. Photo by Greg Gjerdingen, CC-BY-2.0.]

Say that in certain circles and you will be more or less attacked, and not just because you have unwittingly stumbled into a group of crazy people. Without delving into the details of why this is so, we can safely lump all those reasons under “aesthetics.” From the style of the body to the sound of the engine to the nostalgia and associative power of the older car, it has a combination of attributes that, to its devotees, make it “better” than what they could buy for the same money as an outstanding example of the old muscle car is now worth—say, a brand-new Mercedes-Benz E400 mid-size sedan, full of technology no one was even thinking about in 1967.

[Image: Mercedes-Benz E-Class sedan. Photo by Vauxford, CC-BY-SA-4.0.]

Comparing more recent cars to each other requires distinguishing between subtler differences–easier to overlook, but still important. Like any widely-used technology, the car is shaped not just by designers and engineers seeking aesthetic and physical performance attributes, but by cost constraints, materials availability, and a legal-regulatory environment. Right now, the legal-regulatory environment is driving automotive technological choice perhaps more than any other pressure. In 1967, few were concerned about fuel consumption or emissions. There were no laws requiring drivers and passengers to wear seat belts, though the technology existed. But the car runs in a drastically different cultural milieu now, in which only the reactionary and oblivious are not acutely concerned with reducing fossil-fuel consumption and CO2 emissions. Seat belt laws are only one item in a long list of mandated safety features–features that have saved countless lives in accidents that would otherwise have been fatal. We are rapidly developing viable all-electric cars, and electric motors lend themselves quite well to that application, with their instant, generous torque, quiet operation, simplicity, and longevity. Batteries are another matter, but enough resources are being thrown at battery technology that we have already seen substantial lengthening of range and shortening of charge times just in the past few years. Meanwhile, CAFE (Corporate Average Fuel Economy) regulations and their equivalents in other countries have driven automakers to make substantial changes to power trains and pursue weight reductions to eke out 1 to 2 mpg more fuel economy per vehicle per model cycle.
Weight reduction is an across-the-board win except in cost; it is usually more expensive to make a vehicle lighter while preserving the same strength, rigidity, and longevity, as doing so requires more expensive materials, such as aluminum alloys developed for aircraft and high-strength steels. Power-train changes for fuel economy are what I want to focus on, because that is where the valuable illustration of technological relativism, so to speak, lies.

While it is too early to know, I strongly suspect that the power trains of the internal-combustion-engine-powered car peaked in the first decade of the 2000s in terms of longevity, reliability, and aesthetics (sound and feel). While such advances as electronic fuel injection (widely-available since the 1980s), variable valve timing and lift (1990s), and the six-speed automatic transmission (early 2000s) eliminated waste and contributed to long-term reliability, power trains in this period could be over-built and under-stressed. Most engines were normally-aspirated, rather than turbo- or supercharged. Computer control kept everything running in tight spec, contributing to the smoothness, low maintenance, and efficiency of the power train, and allowing transmissions to respond immediately and smoothly to input, keeping the engine in its optimum power band. By 2010 or so, these power trains were proven to be good for virtually limitless service with only basic, and inexpensive, maintenance. Exhaust systems with catalytic converters and oxygen sensors put out quiet, minimal exhaust an order of magnitude cleaner than what was possible in 1967, without compromising the performance of the power train.

But the tightening of the regulations was relentless, as governments pursued more ambitious targets for the reduction in fuel consumption and CO2. Pushing past the old maxim that “there’s no replacement for displacement,” automakers began to substitute smaller engines for larger ones, adding turbocharging and direct injection to make up the power and torque losses. Six-speed transmissions quickly became yesterday’s news, replaced by seven, eight, nine, and even ten-speed boxes. As for the manual transmission, the joystick of real driving, it has become almost a unicorn. Computer-generated, synthesized “engine sounds” are now piped through the sound systems of “sporty” vehicles, as these new powerplants cannot deliver the sound of a normally-aspirated V8 or V6 (or, for that matter, a high-revving performance-built I4 or I6).

So far, these efforts have paid off; the manufacturers are meeting regulatory requirements and meeting demand. But what about the costs? It’s certainly true that new cars are seriously expensive relative to past markets. They have to be. And with cars just a few years old so excellent, buying new is nowhere near as compelling a choice as it was when I was young, when cars weren’t as durable and new ones were cheaper than they are now. Aside from purchasing costs, though, there are—or may well be—others. Beyond aesthetics, the new power trains may be decreasing reliability and durability across the board (it’s early yet). Adding turbocharging to a small engine adds stress and heat to that engine, and complexity to the power train. Direct injection has proven to introduce premature carbon build-up in some engines. Newer transmissions have had trouble finding and holding the right gear, and are programmed so aggressively for fuel economy that they tend to up-shift too early for optimum engine performance unless put in “sport” mode, thus defeating the entire purpose of their complex and relatively-untried existence. For those of us who celebrated the development of the attainable car to near-perfection in almost every way, lending itself so well to long-term satisfying ownership, the latest developments raise concerns that we may be going back to an automotive marketplace more in line with the short-attention-span, disposable-goods culture that we so desperately need to get away from. I hope not. I don’t think the challenges posed by the new technologies are insurmountable for automotive engineers. But I do wonder whether they will be given the time necessary to work out the kinks before these technologies are themselves phased out and replaced by something else. Some of that probably depends on how quickly electric cars become viable on the mass market.

Regardless, one cannot say that today’s cars are “better” than the cars of ten years ago. Once we are forced to define what we mean by “better,” it’s instantly clear that “better” in some ways means—or likely means—not as “good” in other ways. What are your priorities? What are you willing to sacrifice to have something else? That is how technological choice always works; it’s just that we so often don’t see it, because we live under the illusion of “general progress”: everything is basically getting better all the time.

No it isn’t. It is just getting different.

I know I’m getting to that age where it’s natural to become suspicious of the new and to cling, to some extent, to the familiar. But I also know what it’s like to own a well-built, satisfying car for many years and like it just as much as I did when I got it, or more. So, next time, I’ll be buying a used, over-built, normally-aspirated V8-powered modernized throwback with a sterling reputation and not-so-sterling fuel economy, and hoping gas stays “cheap” for a few more years.