Friday, 19 December 2014

The State We're In, Part 3.b: Two Billion Pounds!!!

In my post yesterday I said that the government had slashed funding for research in UK HE to 'more or less zero'.  "But, hold on there, Grumpy", you might say, "I read in the papers that the money awarded to UKHE from the REF is two billion pounds per an. - two Billion POUNDS - two thousand million pounds!  How do you call that more or less zero?"  To which I say, yup, it's a fair cop.  Two billion pounds is an unimaginable amount of cash to you or me, readers.  In fact, it's shitloads.  Why, even the highest paid man in the UK would take nearly 23 years to earn that much.

...

Hang on...

...

So what we're saying is that one person could earn in twenty-three years what the entire nation is willing to spend per an on the higher educational research activity of over fifty thousand academics (see below) across the entire country. (Actually, given that that salary figure is two years old, he probably earns more now, so it'd take less time.) Let's reflect on that for a moment. Put another way, assuming there are other people earning that sort of amount, or even slightly less, five of them would earn the same, over the next five years, as the entire nation is going to spend (per an) on higher educational research. Or we can look at it another way. Any one of the twenty-five richest people (or people 'and family' - tax dodge) in the UK (as of May this year) could dip into their fortune and pay for the whole country's annual higher education research bill, and still leave themselves with a fortune of between 1.43 and 9.9 billion pounds. 1.43 billion quid, by the way, is roughly 54,000 times the average annual wage in the UK. In other words the average UK wage-earner would take nearly 54,000 years to accumulate that amount of money, and even that would assume that s/he was able to save up 100% of their salary! As yet another abstract formulation: a 10% levy on the estates of only the *twenty-five* wealthiest people in the UK (leaving them with fortunes of only between £3bn and nearly £11bn...) would yield £17.1 billion, a sum that would match the government's spending on the research activities of over 54,000 UK academics for the next eight years. I'm just sayin'. But let's reflect on that a little. That tells us quite a lot, doesn't it, about wealth disparity and the economic priorities of neo-liberal capitalist economics. Is that the sort of country we really want to live in?
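For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch (in Python). The £2bn budget, the £17.1bn levy and the £1.43bn remaining fortune are the figures quoted above; the average-wage figure is my own ballpark assumption, used only for illustration.

```python
# Rough check of the figures above; the average wage is an assumed ballpark, the rest are quoted in the post.
annual_research_budget = 2_000_000_000      # £2bn per annum, as quoted
levy_on_richest_25 = 17_100_000_000         # 10% levy on the twenty-five largest fortunes (figure quoted above)
smallest_remaining_fortune = 1_430_000_000  # £1.43bn left to the 'poorest' of the twenty-five
assumed_average_wage = 26_500               # assumed UK average annual wage, c.2012

years_covered_by_levy = levy_on_richest_25 / annual_research_budget
wage_years_to_match_fortune = smallest_remaining_fortune / assumed_average_wage

print(f"Years of research funding covered by the levy: {years_covered_by_levy:.1f}")          # ~8.6
print(f"Years of average wages needed to reach £1.43bn: {wage_years_to_match_fortune:,.0f}")  # ~54,000
```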

Anyway, let's leave that to one side for now. Two billion quid goes to universities to pay for research. That still can't be bad. How does that work out? By my reckoning, there were 54,893 academics entered into the REF. I don't think that one researcher could be entered into more than one panel but, even if they could, such cases would be a minority. Let's round the number down to 50,000 to be on the safe side. £2,000,000,000 divided by 50,000 works out at £40,000 each. That sounds OK. At first. But £2,000,000,000 per an won't even cover the wage bill of the academics submitted (even at 2011 rates). Of course, academics are not only paid to research, but to teach and administer too. The common formula for research-active staff, in older universities at least, is 40% of time on research, 60% on teaching and admin. At that rate, then, the 2 billion will cover the relevant wage bill of the full-time researching academics. But we also have to factor in the wages of the fairly numerous essential lab staff in the science departments, as well as research librarians and research assistants in arts, humanities and social sciences, administrative support staff and the temporary lecturing staff brought in to cover for full-time staff on research leave. That means that the budget is unlikely to contribute even one penny to the cost of research equipment, or 'plant costs' (maintenance of buildings, electricity, etc.), which in science departments are understandably astronomical. Even cheap humanities departments require annual library and computing budgets to maintain any kind of research viability. All of that now has to be financed from other sources. That means student fees to a large extent, but even then £9k per an is not far above the cost price of a university education (including teaching resources, which admittedly can sometimes double for research), leaving very little for research. Well, fair enough, you might say, if you buy into the US-style neo-liberal propaganda: why should I pay, through my taxes, for someone else's university education? Why? Because culture, civilisation (I'm not even going to be drawn into the economic benefits, etc.).
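If you want the sums laid out, here is a minimal sketch. The £2bn budget, the ~50,000 submitted academics and the 40% research-time formula are the figures above; the average employment cost per academic is an illustrative assumption of mine, not a real figure.

```python
# Rough check of the per-head figure and the wage-bill claim; the employment cost is an assumption.
annual_budget = 2_000_000_000
submitted_academics = 50_000
per_head = annual_budget / submitted_academics
print(f"Per submitted academic: £{per_head:,.0f}")                # £40,000

assumed_employment_cost = 55_000   # assumed average salary plus on-costs per academic (illustrative)
research_time_share = 0.4          # the 40/60 research vs teaching-and-admin split mentioned above

full_wage_bill = submitted_academics * assumed_employment_cost
research_wage_bill = full_wage_bill * research_time_share
print(f"Full wage bill:     £{full_wage_bill / 1e9:.2f}bn")       # ~£2.75bn - more than the budget
print(f"40% research share: £{research_wage_bill / 1e9:.2f}bn")   # ~£1.10bn - this the budget does cover
```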

All this is one reason that everything has begun to turn on research grant income, at the expense of research quality, the thing that evidently drove ICL's Professor Grimm to take his own life (on that subject I can recommend nothing better than this post by the Plashing Vole).

I am also assuming, in all the above, that the money would be divided equally. But obviously the point of the REF is that it isn't. It is moderated to some degree by a department's place in the league. Therefore, for every person or department whose research is only adequately funded, let alone those few whose budget is enlarged, there is another department, or someone else, who is correspondingly underfunded, whose institution is no longer able to pay for them to research. This may drive those people out of the profession, force them to teach more and research less, or even force them onto teaching-only contracts with no time to research at all (and while we are on that subject, you can't say 'that's OK - academics should spend their time teaching the paying student': if you want to teach someone how to be, say, a historian, you have to be a practising historian). It may force them out of the country to work elsewhere, force departmental closures, and so on. All this seriously diminishes UK culture.

Let's look at it a third way. As of 12 September 2011, the UK had spent £123.9bn on the bank bailout (but had planned on spending - and been exposed to the risk of spending - one and a half trillion pounds: £1,500,000,000,000). By comparison, then, the bankers received what, at current rates (assuming our £2bn is an annual budget), the UK would be willing to pay for all of higher academic research for the next sixty-two years. As I have said, £2bn does not cover the cost of research. Let's assume that the actual cost of UK academic research is three times that. It still means we have spent on the bank bail-out enough to cover two decades of top-level research in all disciplines right across the UK (and stood ready to shell out the equivalent of fully funding all university research in the UK for, at current rates, over two centuries...). And one might want to ask what would be a better use of the money: bailing out irresponsible, unregulated money-launderers who hold the country to ransom (we allegedly can't touch them because they'd all leave, and now - post-Thatcher - our entire economy is supposed to be dependent upon the financial wild west that the City has become), or people who work hard to improve, in all sorts of ways, the quality of life (leaving aside the economy, etc.) of the nation. Well, you decide. Maybe tell your MP...
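Again, for anyone checking the arithmetic, a minimal sketch using only the figures quoted above (the £6bn 'true' cost of research is simply the threefold assumption I made a moment ago):

```python
# Bank-bailout comparison using the figures quoted in the post.
bailout_spent = 123.9e9                              # £123.9bn actually spent by September 2011
bailout_exposure = 1.5e12                            # £1.5tn of planned/at-risk exposure
annual_ref_budget = 2e9                              # £2bn annual research budget
assumed_true_research_cost = 3 * annual_ref_budget   # £6bn, the threefold assumption above

print(f"Years of REF-level funding in the money spent:    {bailout_spent / annual_ref_budget:.0f}")           # ~62
print(f"Years of 'true' research cost in the money spent: {bailout_spent / assumed_true_research_cost:.0f}")  # ~21, i.e. about two decades
print(f"Years of 'true' research cost in the exposure:    {bailout_exposure / assumed_true_research_cost:.0f}")  # 250, i.e. over two centuries
```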

But either way, two billion quid is really not a lot for what the nation as a whole gets back.  You have to ask whether, as a reward, it is worth the financial and cultural costs (not to mention the stress, the suicide) that come with the REF.

Thursday, 18 December 2014

The State We're In, Part 3.a: Listmania

So. The results of the REF (Research Excellence Framework for non-academics, or non-UK academics or UK academics that have been hiding in a cave for eight years) are in (or out, depending on your preferred idiom). Consult the lists to your heart’s desire. They will be spun one way or another, stressing one performance index over another, on every university website across the country for months to come. This seems as good a time as any to resume my thoughts on ‘The State We’re In’ (Part 1; Part 2; plus search for the 'State we're in' label for other scattered interim thoughts on various issues).

Well, there’s (I suppose) good news, bad news and (actual) good news.

First the (I suppose) good news. My department came 2nd out of the 83 history departments in the exercise. Yay, woo! And actually this is in some important ways good news. It is good news because some of my colleagues, notably our chair of the research committee and our head of department, put in very long hours of tedious work, not always helped enormously by somewhat thuggish ‘powers that be’ higher up in the university, and it is very good news that that hard work gets some sort of serious recognition. It is also good news in that it represents in some ways the culmination of a process that has been under way for ten years, and in which I think I have played a significant part, of turning the department from one that had for decades had no ambition (other than to be some sort of Oxford feeder college) and had more or less institutionalised mediocrity, into a serious player in historical research in the UK. This provides some reward to all the people who have contributed to that. It is also good news because we are a very good history department. I have some very good and interesting colleagues, especially at the younger end, doing good work in new areas. It is good to have some sort of public indication of that fact; it is good to get some reward for the hard research work we have all put in. It is, furthermore, good news to see some departments who are somehow supposed to be ipso facto the best in the country slither down to something approximating their actual intellectual worth, though only because it might (though actually it won’t) make them think twice before assuming that anyone graduating from or working at another university is somehow some kind of lesser intellect, and before instilling that misplaced sense of intellectual superiority in their students. It might make someone in the general non-academic world realise that there is a disjuncture between privilege and prestige on the one hand and merit on the other. It might even make prospective students (graduate and undergraduate) realise that going to those gilded places will not necessarily get them the best tuition, or expose them to the best historical minds.

It is also, and I think this is very important, good news – indeed an excellent outcome – given the generally humane way in which my department (and our university management on the whole) have managed the whole REF business, compared with horror stories from elsewhere. There have been no threats or other bullying strategies, and I hope that perhaps university management culture might make a note of this. Sadly that wasn’t the case in the institution which produced the top-placed history department, which drove at least one fine historian out of the profession altogether.

[Personally – and I take no pride in this but I have to be honest here – I also take some unedifying satisfaction in seeing departments that drove me out through bullying, or which have serially considered me to be beneath them, or which contain other people who have actively hindered my career, come out many places lower than the department where I work. This, the 'ha, fuck you then' response, is the natural response; it is the response encouraged by the system; it is the wrong response.]

But (the Bad News) this all comes at a cost. 

I am happy for my colleagues that they have got a serious reward for their hard work. I am happy that we have serious recognition as a good history department.  Don’t get me wrong about any of that.  

But I am very wary indeed of the bragging that might ensue, wary of suggesting that this means we really are better than (almost) anyone else, even contingently, temporarily, even taking (as I said in Part 1) the exercise to be a sort of FA Cup contest, as though historical scholarship were a race in which one side could definitively be better than another. I am wary of suggesting that my colleagues in other departments might be worse than us on this basis. The risk of suggesting the above is serious and inherent in the league table culture. We must work hard to counter it (though we won’t, for the reasons I set out in Part 1).

Then, where is the real reward? When the REF (or RAE as it was then) started, the point of the exercise was to divvy up the money the government gave out to universities to fund research. Now of course, the government (and indeed the last Labour government – let’s be clear) has basically cut that to more or less zero. So where do the rewards lie for all the hard work put in by chairs of department, chairs of research committees, and the ordinary rank and file researchers? The reward is located first and foremost in university bragging rights (‘we did better than you, ha ha ha’ [see italicised paragraph above!]), league table positions and so on. This is good news for Vice Chancellors looking for an excuse to increase their pay packet yet further (while putting a brake on that of all the people who did the hard work) but not so much for the rest. Why? Because now there is precious little government funding, so universities have to find other means of raising money. And those means put them all in competition with each other. To get funding we have to attract students, in a zero-sum game, and the league tables’ only value is in that game. Or we have to get grants (in a situation that has led to at least one suicide in recent months), in a climate where grant income counts for more than actual research value. All this ends (well, it ended some time ago) the situation which ought to exist, where academics see themselves as collaborative, cooperative, fellow seekers after knowledge rather than members of competing cells. Second, the participation, the general gloating and the publicity all strengthen the whole dynamic that I discussed in Parts 1 and 2, which produces the situation where any government can get the HE sector to dance to any tune: that, in other words, produces the state we are in. This is all a high price to pay. It is bad news. I feel that someone in a department that (deservedly) did well in the exercise, and who has put in good submissions in the last two exercises, is best placed to make that criticism.

The other bad news is that proportionately far less goes on recognising actual quality research than it used to. On the one hand, part of the submission, in terms of research environment, concerns research income (see above). But research income is not a valid recognition of research quality. For one thing, it is what comes out of a project that should count, not the amount of money that went in (however much the latter delights university accountants). Secondly, what gets the money very often constitutes intellectually pretty lame projects: listing things and putting them online. On the other hand, a large part goes on ‘Impact’ – the many drawbacks of which have been pointed out over and over (not least by science departments, who have done best by the system and thus are best placed to make the critique) and hardly need repeating. As far as history is concerned, though, one additional problem is that the system provides little benefit to those who do not work on British or modern (or preferably modern British) history.

A third piece of bad news concerns the numbers themselves, which are entirely subjective judgements made by small panels, not always drawn from the most respected or most research-productive academics within their fields. Some would say that the data are not robust. More to the point, the fact that the numbers can be arranged sequentially is highly misleading. Look at the history list and you will see that Lancaster University comes in twenty-three places below my department. “Woo”, you might say, “the Lancaster historians must be loads worse than those at Poppleton.” But look again at the evidence (and essentially to be a historian is to master the art of looking again). If you count the GPA of Birmingham (in 1st place) as 100%, then Lancaster came in with 94%, whereas we got 99.6%. That is a pretty fine difference for twenty-three places in the league (or, visually, on the page or computer screen, a big drop of the eye). Indeed, by the same reckoning, the history department that came in thirtieth was still scoring near enough 91%. So all these league tables, all this listmania, have a seriously misleading effect, in addition to all the other detrimental effects the league table culture has on higher education, scholarship and research. Yet those big visual drops of the eye (rather than the actual numbers) are what will put some people's jobs under pressure.
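To make the sum explicit: those percentages are simply each department's grade point average divided by the top GPA. A minimal sketch (the GPA values here are hypothetical stand-ins, chosen only to show the shape of the calculation; the published REF GPAs are what matter):

```python
# Normalising GPAs against the top score; the GPA values below are hypothetical stand-ins.
departments = {
    "Birmingham (1st)": 3.51,   # hypothetical GPA
    "Poppleton (2nd)":  3.50,   # hypothetical GPA
    "Lancaster (25th)": 3.30,   # hypothetical GPA
}

top_gpa = max(departments.values())
for name, gpa in departments.items():
    print(f"{name}: {gpa / top_gpa * 100:.1f}% of the top score")
```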

But here I want to shift tack again and spin this a slightly different way, to end on what I think is some (actual) good news. One bit of good news is that the table does at least shake things up a bit and suggest that the many good universities of the UK are all really pretty similar – that it is not a case of Oxbridge and a couple of others versus the rest of the pre-‘92s, and then all of them against the post-‘92s. What I would hope is that this shaking up might make research students apply to the university where the scholar best-placed to supervise them is working, rather than according to established institutional prestige.


More importantly than that, using the criteria mentioned above, even the bottom-placed history department scored 58% compared with the top. The departments at the bottom of the top 51 were scoring 85%. What I would like to suggest this means, and what I would like to suggest would be the best, the most humane, conclusion that the British historical profession ought to take away from the REF league table, is that historians working in UKHE – across the board, from the top to the bottom of the list – are producing significant amounts of good work. That is actual good news and I want to end on this point, for now. This is what, as a profession, we should be proud of, not institutional bragging rights. Or, as Young Mr Grace used to say, “you’ve all done very well.”

Wednesday, 17 December 2014

A Review of Barbarian Migrations and the Roman West

[I was recently sent this very kind review by Professor Hal Drake of Barbarian Migrations... Sadly this never made it into press as apparently the journal to which it was sent went bankrupt. I hope you will not mind me posting it here. It means a lot to me, as someone trained essentially as an early medievalist who then drifted backwards into Late Roman history,* to receive these words of approval from a highly-respected specialist scholar of the late Roman Empire.

(*I think it is still true to say that most late antique specialists are trained as classicists and drift forwards.)]

Guy Halsall. Barbarian Migrations and the Roman West, 376-568. New York: Cambridge, 2008. Pp. xvi, 591. $41.99 (US), paper.

Of the many debates that perennially swirl around the topic of the Fall of Rome, none is more enduring than the one between those who blame it on internal problems (corruption, decay) and those who cite external pressures (barbarian invasions). The latter view was memorably formulated by André Piganiol in the 1940s: 'Rome did not die a natural death; it was assassinated.' In this refreshing, detailed, and highly informative look at the period of Rome’s fall in the West, Guy Halsall comes down decidedly on the side of the internalists, but with a new twist. The depredations caused by the arrival of new peoples receive short shrift in his pages, but old-fashioned moralizing is replaced by a keen understanding of the role patronage networks, political structures, and social identity played in binding provincials to the imperial center. By blending detailed local analysis with the traditional high politics, Halsall depicts the fall as the 'cumulative effect of myriad choices by countless people' who were 'frequently, if not always, trying to do the opposite' (168-9). Far from passive and dissolute, the empire did not die quietly: 'It went down kicking, gouging, and screaming' (281). The result is a Solomon-like contribution to this debate: 'The Roman Empire was not murdered and nor did it die a natural death; it accidentally committed suicide' (283).

Halsall divides his study into three major parts. The five chapters in Part I, 'Romans and barbarians in the imperial world,' bring readers up-to-date on the debates and issues surrounding this period, which saw the Roman empire in the west replaced by numerous successor kingdoms. The central part, 'A world renegotiated: Western Europe, 376-550,' covers the period frequently characterized by the 'barbarization' of the Roman army and the depredations of barbarian invaders. In these seven chapters, Halsall meticulously surveys changes in the provinces as well as the imperial center. Part III, 'Romans and barbarians in a post-imperial world,' moves beyond the immediate question of Rome's Fall to consider the means by which new states were formed out of territories formerly ruled by Rome. Overall, his aim is to show how Rome dominated the prestige market in the early centuries, and through patronage and gift-giving made barbarians as much as provincials eager to identify themselves with Roman government. This Roman monopoly broke up in later centuries, leaving the way open for new identities to form around the nascent kingships in the western territories.

If all of this sounds like simply another way of saying 'Rome fell,' it is because no summary can do justice to the richness of Halsall's presentation. He demonstrates complete mastery of issues old and new, and puts advances in archaeology to especially good use. Particularly important is his use of processes of identity formation that have been developed in recent decades to counter 19th century notions of a static ethnicity produced by inherent racial characteristics. He is withering in his critique of this outdated concept, which underlies most of the standard accounts of 'barbarian invasions' and 'Germanic kingdoms.' In his pages, ethnicity is an acquired, not a hereditary, trait, something that is continually changing and adapting to new circumstances. In line with much recent scholarship, Halsall also disputes long-held theories of Rome's military decline, arguing that 'barbarization' was actually the result of conscious decisions by Romans to adopt such a persona (90). Instead of focusing on population decline, Halsall points out that the empire continued to possess 'considerably greater resources of manpower than the barbarians' (144).

Halsall's gift for capturing dense issues through an apt analogy helps the reader grasp the import of his findings. At one point he likens the emperor to 'a small, and not especially powerful, light-bulb' (141) to explain the importance of patronage; at another, he conveys the political and permeable nature of the northern frontier by likening it to an 'Iron Curtain' (141). As these and the quotations in this review indicate, Halsall is a vigorous stylist. Although he uses the newest techniques, he is not a slave to them, and while he is judicious, he does not mince words. Migration theory, he points out, 'has yet to be employed to explain anything' (418), and a new fascination with DNA evidence reflects a 'current vogue for forcing modern archaeological science to yield answers to old-fashioned and crudely formulated historical questions' (452).

Efforts to minimize the impact of the invasions produce some tortured reasoning, such as his argument that the barbarians amounted to no more than 'a small percentage of Europe's population' and their movements no more disruptive than that produced by the transfer of a few Roman regiments (455-6), or his argument that the constant squandering of resources in internal disputes proves that Romans themselves did not think these incursions significant (chs. 7, 8). If there is a whiff of special pleading in such assertions, it is a small price to pay for a book that contains so many treasures. Halsall has pulled off the difficult trick of writing a textbook that can be read with profit by anyone interested in this large and enduring question.  
H.A. Drake, University of California, Santa Barbara

Tuesday, 25 November 2014

Another post updated: Transformations of Romanness

I have updated this post from last year, to represent the finished text sent for publication rather than the one originally delivered.

Sunday, 23 November 2014

Society, Individual, Exclusion: Update

I have updated this post, the text of which is now the version submitted for publication rather than that delivered at the Padova conference.

Tuesday, 11 November 2014

Blog-post Published

As of today, this post - one of the more viewed blog-posts on this site - is published:
Two Worlds Become One: A 'Counter-Intuitive' View of the Roman Empire and 'Germanic' Migration Guy Halsall
German History 2014 32 (4): 515-532
doi: 10.1093/gerhis/ghu107

Friday, 31 October 2014

Ethically-engaged Early Medieval History-writing. Why it matters and how (not) to do it ... or 'They was asking for it!'

Every year I do a session on 'History and Ethics' with the students on our Public History MA course.  As you know, this is a topic that matters a lot to me, and about which I have blogged before (see posts with the label 'the unbearable weight' and/or 'ethics of history'). 

I like to open these discussions with these two passages written by a well-respected and prominent early medievalist.(1)

The first comes in the context of a passage trying to minimise the extent or severity of Viking attacks on ecclesiastical persons and property.

In 860 the monastery of St-Bertin was attacked but the community had plenty of warning. According to the author of a translatio written a generation later ... all the monks fled save four 'intent on martyrdom, save that God had to some extent decided otherwise'. ... [The Vikings] had been 'hoping to capture some monks' - after subjecting three of the four who were 'older, thin and wasted' to 'painful acts of scorn and mockery' (such as pouring liquid into the nostrils of one of them until his belly was distended) [they] tried to take away the fourth, 'more succulent than the rest'. The idea was surely to take this one for ransoming. He was the only one to be killed. He refused to go quietly, throwing himself to the ground and insisting that he wanted to die on the spot, where he might be buried 'in the cemetery of his ancestors, and his name be entered on the commemoration lists of his brother monks'. Apparently out of sheer vexation at his obduracy, his captors began to beat him 'with their spear-butts', then pierced him 'with spear-points' ... and the cruel game got out of hand. [I have added emphasis.]

The second comment is in the context of alleged Viking 'pillage, plunder and rape':

Among all the Annals of St-Bertin's references to Viking plunder and pillage, there is no mention of rape, and this is significant, given these Annals twice mention the episodes when the followers of Christian Carolingian kings committed rape, in one case the rape of nuns.  It hardly follows that Northmen never raped: it does seem that they were not notorious rapists. [Emphasis added.]

The second passage in particular tends to shock the students; the first less so (though I think it is every bit as bad).

I am not concerned with the 'Viking atrocity' debate, within which this is situated and about which I have written before. What concerns me are the ethical implications of this piece. It is one thing to say that the Vikings were no worse than anyone else (an argument with which I would agree), but this goes far beyond that and into an ethical relativisation of violence, torture and rape. That, as you might imagine, I find objectionable, offensive and irresponsible.

Let's look more closely at the first paragraph. Here, above all, the blame is shifted onto the victims. The monks allegedly had 'plenty of warning', so they should have got away. Silly old monks! It was after all the four monks' own 'decision' to stay behind. Wouldn't they learn? Then we essentially have the torture (including waterboarding) of frail old men passed over without any comment, other than to quote the description in the contemporary source, which in this passage is in any case explicitly being 'read against', of 'painful acts of scorn and mockery'. Finally we come to the monk whom the Vikings wanted to take away. 'Surely', says the author, this was to ransom him. Why 'surely'? If this intention were attested in the source, the term 'surely' would not have been employed. Why was the intention 'surely' to ransom him, rather than - say - to torture him to death for fun, to rape him (he was after all 'more succulent than the rest'), or anything else? [Note that rape is only envisaged in this passage in heterosexual terms, although 'painful acts of scorn and mockery' is a broad category.] And then maybe ransom him? That interpretation is the modern historian's assumption, and it seems to me to be unwarranted. Why, we are authorised to ask, make that assumption? Anyway, the monk refused to 'go quietly' and this excuses the Vikings (driven 'apparently by sheer vexation at his obduracy'; why 'apparently'? Again this begs serious questions) for their act of pig-sticking him slowly to death. Just to add the icing on the cake, this lengthy and painful murder is described merely as 'a cruel game'. Boys will be boys, eh? What can you do?

The second passage requires little by way of commentary.  There is, apparently, rape and then there is 'notorious rape'.  We might call this the 'Ken Clarke view' of history.  Whether the relative mention of rape by Christians and pagans might have other explanations within early medieval textual strategies is not considered.  Here, frequency of mention equals relative notoriety.  Frankly, I don't think much else needs to be said.

So - why does this passage vex me so much (other than the fact that it is written by someone who, one would like to think, damn well ought to know better)?

For one thing it raises the issue, which I have discussed before, of the historical 'statute of limitations': how far back in the past do events have to be before it becomes acceptable to write about them like this? Let's leave the obvious 'limit case' to one side for now. Would it be considered acceptable to write in this fashion about the Austrian troops rather brutally occupying Serbia during the Great War, playing down torture and rape because they were allegedly no worse than what the Serbians did? At what point does it become acceptable to go beyond the sometimes necessary but nevertheless pretty vapid 'well they were no worse than anyone else' argument(2) to actively playing down violence like this - actively introducing victim-blaming and ascribing (*relatively*) benign intentions to the aggressors? When? Those Serbian civilians: they had plenty of chance to escape, you know. If they didn't flee, well, that was their decision. If they didn't want to 'go quietly', who, I ask you, among us, can blame the Austrian soldiers for shooting or bayoneting a few of them? At least they weren't 'notorious rapists'. Transposed to that context, the nature of the writing really becomes apparent. Don't ninth-century people (and these were, I assume, real people - at least the author assumes so) deserve better? Don't they deserve the same respect?

When the historicising of violence goes this far - as far as to attempt to excuse the perpetrators - what is the logical implication? The logical implication is that, in certain contexts (so, why only historical? why not geographical, social or cultural?) violence - here, murder, torture and rape - can be relativised. Whenever we are thinking of the public role of history, or of the socially-committed historian (something that the author of this piece not infrequently strikes postures about), it seems to me that implying this sort of thing is the very last thing a historian ought to be doing. To call this irresponsible would seem to be saying the very least, but I would go further. I will say, unapologetically, that I find it disgusting. Why, therefore, we are entitled to ask, is this kind of writing considered unremarkable and acceptable in a book about the ninth century when it would more than merely raise eyebrows if found in a book on the twentieth?
---
(1) You can, if you are interested, find the passages on pp.28-9 and p.47 of this book.  I have not named names directly because, although there is certainly no love lost between me and the author, the point of this blog-post is the general issue of attitude and acceptable methodology - that this sort of writing is seen as generally unobjectionable by the historical profession - not an ad hominem/feminam attack.

(2) My own Viking piece was about why, when the Vikings did the same as everyone else, they got a worse press for it and had demonstrably worse effects.

Thursday, 23 October 2014

Otherness and Identity in the Merovingian Cemetery (updated)

I have updated this post, now with complete footnotes.  Although it did receive a sniffy comment (which had signally missed the point of this piece, the topic and indeed my entire oeuvre), I think that identity and its performance/citation are absolutely central to social dynamics and change, and a key element in how and why I take issue with certain other historians of the period.

Tuesday, 21 October 2014

The sincerest form of flattery...

The other day I was enduring the reading of Lotte Hedeager's Iron Age Myth and Materiality. An Archaeology of Scandinavia AD 400-1000 (London 2011), not something I would generally recommend as it is, in my view, to use technical academic language, 'absolute bollocks'. However, as I was writing up my Style I paper, it had to be done.
 
Anyway, I got to this bit (pp.37-38), which talks about the historiography of the barbarian migrations:

[Image: the passage from Hedeager, Iron Age Myth and Materiality, pp. 37-38]

I thought to myself, 'hmmm... all that sounds strangely familiar'.  I am sure I have read that somewhere before.  And indeed I had.  Not least because I had - largely - written it somewhere before.  Here, for your interest and amusement are pp.35-36 of G. Halsall, 'The Barbarian Invasions', in P. Fouracre (ed.), The New Cambridge Medieval History (Cambridge, 2005), pp.35-55.

[Image: pp. 35-36 of Halsall, 'The Barbarian Invasions', The New Cambridge Medieval History]

You might like to compare the bits underlined in the same colours, and the bits shaded in in green. 

All I can say is that, if this were done in an assessed piece of student work (undergraduate or postgraduate) it would be seriously penalised for plagiarism.  The occasional general Harvard-system reference, usually sans page numbers, sometimes with other references included, won't cut it.  Now, there are many ways in which two texts can end up looking alike, independently of copying, especially where telling the same short story featuring the same characters etc. (cp. my retelling of Gregory of Tours' account of the Sichar and Chramnesind "feud" in 'Violence and Society...' with Peter Sawyer's in 'The Bloodfeud in fact and fiction': the two are very similar, in spite of me writing that part of my piece before I had read Sawyer's article - indeed I modified mine slightly to avoid being falsely accused of plagiarism).  There are also ways in which very close notes taken verbatim from a source end up being transferred from notebooks to texts without modification.  If it is the latter that is at stake here (it is certainly not the former), then it speaks of a serious lack of scholarly care and attention at best.  Certainly it rather undermines all the efforts to which one goes telling students not to do this, when archaeology professors at Oslo do.

Or perhaps it is time for Routledge to start running the manuscripts it receives through Turnitin...

Monday, 20 October 2014

The Space Between: The undead Roman Empire and the aesthetics of Salin’s Style I

[This is the text read as the annual Sir David Wilson Lecture in Medieval Studies, at UCL's Institute of Archaeology last Wednesday. Needless to say, most of the great and good didn't turn out (although it was very nice to see, and all-too-briefly speak to, Wendy Davies and Susan Reynolds) and I felt the waves of uncomprehending hostility coming from some parts of the room! It doesn't yet have completed footnotes, and some sections of the work in progress were omitted for timing's sake. What remains to be added, in particular, is a discussion of the gendering and social contexts, etc., of the objects themselves. I will put up the full version in due course. Much in this version will be familiar from its first iteration but this does, I think (I hope), take the argument in rather different directions.]

First classified by the Swedish archaeologist Bernhard Salin over a century ago, the major decorative style employed around the North Sea during the late fifth and sixth centuries still bears his name as ‘Salin’s Style I’, though more frequently referred to as ‘Animal Style I’ or simply ‘Style I’. Two of its principal features are the dissolution and ambiguity of the image. This paper argues that Style I may be understood partly in the context of the dramatic, traumatic fifth century but especially in that of what, in the title, I have referred to as the ‘undead’ Roman Empire. By that I mean the period between the 470s and the middle quarters of the sixth century when, although the pars occidentis was not functioning politically and its imperial office was in desuetude, it was not known – it cannot have been known – that the western Empire had definitively ‘fallen’. Awareness of that fact – the ‘fact’ itself – came with Justinian’s ideology of ‘Reconquest’ and, especially, with the wars that followed hard upon it. In the interim, although the western Empire might have looked dead, it might yet have sprung back to life. The space between the Empire’s death and its burial – the period of the florescence of Style I – was a period of undecidability; undecidability is central to understanding Style I.

There has been much thorough and splendid structural and formal analysis of Style I.  Empirically, this paper is based entirely upon these studies.  Less satisfactory than the descriptive analyses, however, are the explanations of how, in Tania Dickinson’s phrase, ‘animal art gained its place in early medieval affections’, which is the question confronted here.  Previous analyses have been constricted within an unsatisfactory and problematic conceptual matrix.  The first half of this paper therefore undertakes some overdue ground-clearing.

One axis of the conceptual matrix just alluded to is the art-style’s description as ‘Germanic’.  Take the following phrase:

‘[Style I’s appearance] is marked by the sudden disappearance of all sea creatures, which up till then dominated Scandinavian ornament and represents the beginning of the Germanic interpretation of the animal world’ (emphasis added)

This begs two crucial questions: ‘why then?’ and ‘why like that?’ If the term ‘Germanic’ can perform any analytical work, we are entitled to ask why the ‘Germanic interpretation of the animal world’ in art only ‘begins’ in the late fifth century, when Germanic-speakers had dominated the region for centuries. Furthermore, why does this art take this particular form after (and indeed before) centuries during which the metalworkers of Germania Magna proved more than capable of reproducing Roman models or coherent figures and interlace? Appeals to a pan-Germanic cultural ethos get us nowhere in response to either question.

That the notion of pan-Germanic identity and ethos emerged in the precise, contingent circumstances of sixteenth- to (especially) nineteenth- and twentieth-century German politics is well established. Yet, such appeals to ‘Germanic’ culture remain as common as ever.  

Another fairly widespread claim is that Style I was a badge of a ‘shared Germanic aristocratic identity’. When applied to decorative art, this argument is circular. Style I’s popularity is explained because its ‘Germanic’ nature appealed to the ‘Germanic’ social élites who sponsored its production. For the reasons previously outlined, the ‘Germanic’ label can bear no analytical, or even descriptive, weight, either for the art or the people. That a common élite identity existed amongst late fifth- and sixth-century Germanic-speakers finds no more support in any written data from the regions allegedly encompassed by this shared aristocratic culture. This explanation has no empirical grounding, so the shared ‘Germanic’ élite ethos has to be extrapolated, in reverse, from the distribution and popularity of the art style. And so on... If there were any area where a common, non-Roman, military identity might be employed to unify Germanic-speakers of diverse cultural, geographical and familial backgrounds, it would not be Germania Magna but, ironically, the Roman Empire itself, where fifth-century politics were increasingly focused around military leaders of at least a claimed non-Roman origin. The material culture that did come to be employed to unify that disparate élite group originated in the Mediterranean, not the Baltic. The term ‘Germanic’ has no rigorous analytical meaning or value when applied to fifth- and sixth-century decorative styles. It is an empty placeholder.

The other component of the analytical matrix is religious. Style I imagery is frequently read according to ideas drawn from Norse sources from (at the earliest and most optimistic) half a millennium after the appearance of this form of decoration. These are then read through anthropologically-derived ideas of shamanism and tribal ritual. It must never be forgotten that the principal sources upon which any view of Nordic paganism is based, the Prose and Poetic Eddas, were both written down in earlier thirteenth-century Christian contexts, the former by Snorri Sturluson. The extent to which they represent earlier texts (written or oral), the faithfulness with which they do so, and how much earlier any such texts can be dated are all matters for guesswork, not fixed analytical starting points. This approach constitutes a retreat, simultaneously, from any attempt to read material culture on its own terms and from any effort to face up to the potentially unpalatable possibility that the signification of such evidence may no longer be accessible, and into a simplistic but convenient application of later textual sources. The supposedly ‘theoretical’ attempts to justify the approach are philosophically and methodologically incoherent.

The ‘pagan’ axis of Style I interpretation assumes that the image of pagan belief obtained from these texts can be applied, in detail, to artwork no less than 750 years earlier. However, between the late fifth and the late twelfth centuries artistic motifs changed dramatically in the Nordic world. Style I gave way to Style II, Style III, and then the array of Viking styles – Oseberg, Borre, Jelling, Mammen and Ringerike – before, with Urnes Style (in its early, middle and late forms), we arrive at the end of the Germanic/Viking Animal Art ‘era’. Even at Urnes Style’s demise we still have fifty years or so to wait before (in a world, remember, where none of the art supposedly representative of these beliefs had been produced for two generations or so) Snorri wrote down the Prose Edda. While Scandinavian decorative art between c.475 and c.1170 went through at least nine transformations of sufficient importance to necessitate a change of art-historical categorisation, the mentalité it represented is supposed somehow to have drifted along unchanged beneath this turbulent surface of actual, documented expression. The difference between the supposed stasis of Scandinavian pagan belief and the dynamism of its artistic form is made yet more problematic by the paradigm itself, which sees Style I’s emergence (surely correctly) as symptomatic of important social and cultural change. If that were the case, then something similar surely lies behind the many subsequent changes of style. Scandinavian archaeology reveals important, dynamic socio-economic change throughout the second half of the first millennium. Against this background, an entirely stable set of religious beliefs and practices defies credibility.

Further, if Style I’s content is religious, does its difference from earlier art mean this religion only emerged c.475 and, if so, why?  Or, if the religious beliefs were older, why did art not manifest them before the emergence of ‘Germanic animal art’, or why did it represent them in such different ways?  These problems were encountered with the ‘Germanic’ construct, and they are equally fatal to the approach.  In answer to the questions of ‘why then’ and ‘why like that’ the religious axis takes us nowhere just as quickly as the ‘Germanic’.

The strongest argument in favour of the approach is the depiction, on metalwork contemporary with and related to Style I, of characters and episodes identifiable, with varying degrees of plausibility, as those mentioned in the Eddas. This raises two problems. A fourth-century depiction of Christ at, say, the marriage at Cana might be recognisable as such to a twelfth- or thirteenth-century western European Christian. We would be quite wrong, however, to assume on that basis that thirteenth-century Christian theology, practice and organisation could be applied in any detail to the church of c.400, or even of c.900, and its artwork. In that instance, moreover, a series of more or less canonical written texts existed, to anchor beliefs to some extent. It is noteworthy, furthermore, that the earliest runic texts actually to mention the Nordic gods come from slightly later than the period of Style I and frequently from further south, in ‘Alamannic’ regions.

These analytical axes are sustained by a raft of similarly evidence-free, mystifying ideas about the artefacts’ (or their makers’) ‘magical’ qualities and by ideas of ethnicity, migration and of a rigid Christian-pagan divide which are inadequate to the task. No prima facie evidence exists for the notion that the animals and other design elements of Style I or its precursors (and immediate successors) were religious or apotropaic and, although it remains a possibility, the endless reiteration of the interpretation does not make it a fact or even a solid basis for discussion.  There are indeed fairly good grounds – Style I’s very wide geographical distribution and the cultural and indeed religious diversity within that spread – for seeing it as quite implausible.  The ‘Nordic/Germanic pagan’ interpretation is essentially a self-reinforcing matrix of a priori assumptions.

Here, it is important to confront the predictable objection that an approach drawing on Derridean philosophy and Lacanian psychoanalysis is somehow ‘anachronistic’.  It should be admitted that the method used here might be based upon a problematic claim to being achronic; that there lies at its base an assumption that a particular psychoanalytic approach has timeless applicability, or that the effects of, for example, entry into the world of language, are of eternal value.  That criticism is entirely valid.  However, by limiting the analysis to data and documented circumstances of the late fifth and sixth centuries in areas where Style I was popular, rather than invoking notions from questionable sources written centuries later or assuming a pan-Germanic ethos that applies across time and space, this methodology is demonstrably less anachronistic than those currently employed.  More importantly, perhaps, Style I will here be analysed according to its observable effects on established artistic traditions.  It is thus more rigidly contextualised than ‘Germanic’ or ‘pagan’ readings.  This paper will not discuss Style I in terms of religion, iconography or function.  Instead, it offers a tentative interpretation of Style I’s aesthetic.  Why did late fifth- and sixth-century people like this style, as they clearly did?  What drew them to it?  What appealed to them about it?

There is of course no probative answer to such questions but addressing them may offer a more profitable means of using decorative style to think about the changes that occurred around the end of the Western Roman Empire.  To allow fifth- (as opposed to twelfth-, thirteenth- or nineteenth-) century people to speak, to listen to them as far as we are able, in order that their experience can be thought through to help us act in the present, the starting point must be their own ‘texts’ and concepts, not those of later centuries.

It is methodologically important to recognise that decorative composition shares crucial features with writing; it is écriture in the Derridean sense.  Meaning depends upon presence, absence and juxtaposition, along endless chains of signification.  It emerges, however, from that background of différance, that structuring trace, that ‘beginning space’, without which no form of signification can function.  This is essential to my reading.  It serves an additional purpose in (to some extent, but only to some extent) breaking down the difference between the documentary and the archaeological.

In the case of Style I art, interpretation frequently begins with the statement that the barbarians lack written sources and thus these decorative systems are especially important.  The objects themselves serve no more purpose than to have the notions discussed above mapped onto them.  Quite how (or whether) the analysis would differ if the style took a different form, or indeed how the social analysis of Scandinavian society c.500 would be altered if these objects did not exist at all, is unclear.  The published studies permit us to doubt that any significant difference would have been made.  The stories and ideas drawn tendentiously from the Eddas would, we may legitimately postulate, have simply been mapped onto, and illustrated in different ways through, other art styles.  We can reasonably imagine that the social analyses would remain the same even were there no decorated objects at all, as the range of articulate evidence employed would remain entirely unaltered.  The clear implication is that the dynamic, conscious and articulate changes made by contemporaries in the way they represented the world, religion, society and ideas count for nothing compared with a handful of written texts whose precise applicability to the mentalités of any specific part of the period between 475 and 1220 is uncertain.

The approach seems to be motivated by a concern that archaeology will have nothing to say without importing the Eddic material.  This unnecessary counsel of despair grates when set alongside two common tenets of modern ‘theoretical’ archaeological writing: that the material cultural record is as articulate as the written, and that interpretation of the excavated data should not be driven by textual sources.

The current paper is grounded in untestable hypotheses; that much should become clear and, if not, be made clear at the outset. However, it proceeds from the data’s observable features and links them to concepts drawn from contemporaneous written data. It endeavours to allow the period’s decorative art to contribute actively to the understanding of society and its response to the political change.

The foundation of my analysis is that imperium and barbaricum were not two opposed, antagonistic worlds but core and periphery of a single orbis Romanus. Changes within the Empire had deep effects on barbarian society and politics. The North Sea was a cultural province, with movement around and across it – in all directions – throughout the late imperial period. Furthermore, the overwhelming balance of cultural influences was from the Empire to barbaricum: in terms of pottery, metalwork, cultural forms (inhumation), and artistic style. The adoption of Roman artistic motifs by the craftsmen of Germania Magna had a long history. Throughout the Roman Iron Age, brooches were imported into trans-Rhenan barbaricum and copied. Roman style had a profound influence on northern art up to and throughout the fifth century. Two examples will suffice. At Fallward, on the north-western coasts of trans-Rhenan Germania, a lavish burial was deposited around 400, which included a wooden chair that has been thought of loosely as a ‘throne’. This was clearly of some significance, given its public deposition in the burial, and is decorated in a geometric style modelled upon the ‘Kerbschnitt’ (chip-carved) style of late Roman official metalwork. At the other end of Germania Magna, in Alamannic territories, local leaders produced copies of imperial Roman brooches for distribution to their followers, presumably as signs of the latter’s connection with the centres of power. In explaining Roman influence upon the craftsmen who produced the immediate precursors of Style I, therefore, it is unnecessary to invoke, as Haseloff did, the kidnapping to northern Germania of entire workshops of Roman artisans.

Northern Gaul’s influence upon the coastal areas of north-western Germany and Scandinavia is especially important. Fourth-century northern German cremations contain significant numbers of items of imperial metalwork, whether belt-buckles and other fittings or, less commonly, Zwiebelknopffibeln: brooches used as rank insignia. The brooches traditionally employed to suggest ‘Germanic’ settlement of northern Gaul share this distribution and some decorative elements, and closer study of their typology suggests that they too originated in northern Gaul and were exported to, and copied in, Germania Magna.

To understand Style I further we must back-track into late imperial art and mentalité.  The centre-point of the Roman thought-world was the idea of the civic Roman male, embodying ideas of freedom, the law, reason, and moderation.  This ideal was, in Lacanian terms, the point-de-capiton, the quilting point, of the whole signifying system: the master signifier which provided all the others with their precise meaning. The point-de-capiton retroactively fixes other potentially shifting signifiers and oppositions.  Concepts such as womanhood, barbarism, the animal, freedom and so on, all acquired their meaning by reference to this. Furthermore, a socio-political actor always sees him- or herself as in the gaze of the imagined ‘Big Other’ (the irreducibly ‘other’; loosely, the signifying order, idealised social structure, society-in-general) and judges his or her own conduct to some extent in that light.  Even the martial model of Roman masculinity, which emerged during the fourth century, was governed by this.  Crucially, the domination of Roman ideas of power over society and politics in barbaricum by the fourth century implies that these features probably exercised important influence there too.

The civic masculine ideal lay at the heart of, with its performance required for participation in, all Roman politics, whether at the local or imperial scale, and beyond.  The libidinal motion of subjectification was always towards this central ideal, which of course could never entirely be reached.  At the heart of the system, the Emperor and those who held power at court held enormous power, obviously, but not least in being able to proclaim a person’s closeness to or distance from the ideal of the Roman male by distributing the signs, material and other, of acceptance into the world of legitimate power.

The depiction of the human figure can never be a simple representation of a bipedal hominid but in late imperial art this is especially true.  Its enormous signifying burden might be visible in the changes to figurative art in the late imperial period.  As has long been noted, the figurative art of late antiquity is characterised by a less naturalistic, more stylised representation.  What happened to the figure in parts of the decomposing imperium Romanum cannot therefore lack significance.

As has long been known, the decoration of official imperial metalwork was, partly via Nydam Style, Style I’s lineal ancestor. It followed strict rules. The centre of the design is always made up of geometric or plant-based designs, very regularly set out. Animals, depicted naturalistically in semi-plastic form (contrasting with the deeper ‘chip-carved’ geometry of the central sections) in spite of the fact that they are usually mythical hybrids, are only found on the edges. It seems legitimate, if obviously uncertain, to see this opposition between centre and periphery as equating, not least in its layout, with other crucial binaries: the regular and the disordered; the natural and the unnatural; the civilised and the uncivilised; the human and the mythical; the cultivated and the wild; and so on: in short, perhaps, the Empire and barbaricum. This interpretation of the scheme as a metaphor for the imperium Romanum might be underlined by looking at other late imperial art, especially that which represents the emperor and barbarians, like the Column of Arcadius in Istanbul. Too much should not be read into the boundary between centre and edges, which is unifying as well as distinguishing, and in fact essential to the signification of the whole. This precise reading, whether or not it is accepted as plausible, matters less to my argument than the fact that the style, its motifs and strict rules of composition, as well as its symbolism (whatever that may have been – but some connection with the Empire surely featured) were well-known, respected and in some way appreciated across a wide swathe of north-western Europe, where Style I became popular.

The motifs on the edges of the design and those in the centre signify not in and of themselves but by virtue of their relative position.  The boundary or, better, the marker of the space between centre and edge, therefore, so far from dividing those components, is actually the element most constitutive of meaning.  These points are emphasised by the fact that this decorative art, visible on artefacts of many types in the imperial north-west, is characterised by – in addition to the regularity just discussed – its unambiguity and its timelessness.  In general, its designs can only be read one way.  A beast with the rear portions of a sea monster and the front portions of a horse cannot be seen as having the front portions of a sea-creature and rear portions of a horse, or as being composed of other animal or human parts.  The lion with a dragon’s head at the tip of its tail found on the well-known buckle from Aquileia is unambiguously that.  The dragon’s head cannot be understood as the front of the hybrid, even if one takes the lion’s tail as its neck; the naturalistically-depicted legs bend the wrong way.  Even a more symmetrical two-headed animal can only be read one way.  However one might quibble about which end was the front, it is unambiguously a two-headed animal of a specific type.

The art’s spatial and signifying regularity gives it its temporal, or rather atemporal, dimension.  Regardless of when (as well as from which direction) it is seen, the overall picture may usually only be read one way.  Meaning is not contingent upon the viewer’s participation; it requires the presence of neither the artist nor the intended recipient.  It has no ‘active present’; this will become clearer in considering its opposites.  As with any text, the image may still be understood, overall, in different ways.  If, for example, one accepts that the beasts on the edge represented the barbarian and the geometric centre signified the Empire, the former might nonetheless have been understood as simply peripheral, or as besieging and threatening, or as guarding and protective.  In other words their meaning may be valued or devalued, vis-à-vis the centre, but is still contingent upon that spacing, upon the precise and regular juxtaposition of elements.  The design remains a constant, even if it contains no self-present truth.  In all this, the decorative style of imperial metalwork has the features of écriture, as discussed earlier.

There is one possible – and important – exception to this rule.  Sometimes the geometrical arrangements in the centre of the field can be perceived in subtly different ways if the object or the viewpoint is rotated slightly.  The pattern in the centre of the belt-plate from Maxglan, for example, can be seen as four peltas arranged in a cross or, rotating the viewpoint by 45°, as a floriate cross.  Only one reading of the pattern is possible at once.  Clearly, in examples like this, the atemporal aspect of the artwork is undermined.  The extent to which this ambiguity is deliberate is unknowable but it would seem rash to dismiss it.  Jane Hawkes plausibly suggests, in relation to later art, that ambiguities of the type discussed encourage reflection or contemplation of the cross-form that is here common to both readings.  In the Christian Empire this possibility seems attractive.  It may be the case that one reading – that which presents itself as most obvious when viewing the object along its principal axes – is intended as dominant or primary.  Nonetheless, the space between the two interpretive options – this space of fundamental undecidability – will become a central feature of Style I.

Artistic style is one of many areas where fifth-century evidence tells a different story from that adopted in modern narratives, especially those attributing fundamental causal importance to barbarian migration.  What is clear for most of the century is not the gradual spread of artistic influences from Germania Magna into the provinces but, rather, the continuing grip in which the rules of imperial art held decorative expression.  In north-western Europe, within and without the increasingly permeable frontiers, fifth-century decoration continued to play within the guidelines of imperial ornament.  Chip-carved styles generally perpetuated the rules of composition.  ‘Quoit Brooch Style’ has long been known to do this.  ‘Saxon Relief Style’, similarly, is almost entirely bound by these rules.  Peter Inker has argued that it shows a vigorous ‘Germanic’ reworking of Roman models.  What exactly is meant by ‘Germanic’ is again unclear at best and problematic at worst but, in any case, there seems little that can really be labelled a ‘vigorous reworking’ of the imperial decorative grammar.  It is rather the fact that ‘Saxon Relief Style’ was entirely compliant with the latter which explains its wide adoption around the North Sea in the mid-fifth century.  Possibly, it consciously expressed some kind of political opposition to the ideas and identities represented by other Romanised decorative systems, like ‘Quoit Brooch Style’.  This point bears importantly on discussions of art and identity in fifth-century Britain.  There is nothing about ‘Saxon Relief Style’, any more than ‘Quoit Brooch Style’, that would shock a provincial Roman concerned about properly expressing claims to legitimate authority through ornament.  This might have made it entirely appropriate in the Romano-Saxon polities emerging in fifth-century Britain.  Both may be said to continue the tradition, long visible within and without the limes, whereby copies of the artefacts held to embody the prestige of the Empire were manufactured, especially when they were unavailable directly from imperial sources.

Whatever its stylistic genealogy, though, Style I is quite different. Usually the animals take over the centre of the field although, especially outside the North Sea region, some Style I objects remain within the old compositional rules.  The extent of change in the nature of the animals may be overstated; quadrupeds feature in the immediately preceding decorative styles and in late Roman imperial metalwork.  More interesting is the Style I beast’s well-known incoherence, its body fragmented into different components, appearing in its extreme form in what Haseloff memorably called Tiersalat: animal-jumble, a mess of animals.  The animals’ bodies lose their edges, being reduced to a series of parallel contour lines and sometimes to a single line.  This is compounded by the common ambiguity of Style I beasts, which can terminate, when looked at one way, in a human head or, when viewed another way, in two confronting human heads, or in beasts’ heads, or as a single face mask, sometimes with knock-on effects for how the rest of the body is read.  The figure, in its coherence or interpretative clarity, has gone.  Elsewhere within the repertoire of the Style we can see, amongst other beasts, what might be visual trickery or play; what look like elements of bodies disappear or lead nowhere on closer inspection.

The relationship between object and viewer is crucial to the production of meaning.  In imperial metalwork the stable (or relatively stable) element was the object, the text; with Style I the roles are – if anything – reversed.  The observer actively creates meaning from the unravelling of the animals and, where necessary, the decision between different interpretive possibilities.  This is the style’s ‘active present’. The beasts can only be one thing at once but what that thing is can change through time, whether between viewers or with the same viewer looking again. Vitally, the aspects of space, difference and atemporality are undermined.  This implies an end to the ornament’s narrowly ‘textual’ element.  There seem, in other words, to be two aesthetics at work: one in the styles that conform to imperial decorative grammar, another in Style I.  In the artwork based around imperial decorative grammar what seems to dominate is a fixity of symbolic content; in Style I, by contrast, the free play of the signifier takes over.

However ambiguous or undecidable Style I may be, it could nevertheless only play its semiotic games against the background (and stylistic rules) of chip-carved imperial metalwork, familiar throughout the areas of its popularity.  The animals’ central positioning and different form and the new ways in which they are carved only signify by virtue of their difference from the preceding system, to which they make constant reference.  The ambiguity stressed in Style I art may develop and exaggerate, but via different components, that which already existed within the geometries of imperial decoration.  The rules of that earlier style gave the elements of Style I their signifying capacity but it can equally be said that what Style I does, simultaneously, is actually bring back to the surface, and swirl about, the structuring ‘trace’ that lay behind the decorative grammar of chip-carved ornament.  However, to make a point by breaking rules, the rules have to be known; the rules are referenced in their transgression.  That means that it is quite wrong to see Style I as leaving the orbit of imperial decorative grammar, or as signifying entirely on the basis of its alleged non-Romanness.  Just as a political stance based upon barbarian ethnicity necessitated acceptance of the rules of Roman ethnography and stereotyping, so, if Style I plays with (if not necessarily within) the rules of imperial chip-carved style, it cannot stand outside them.

At the basis of all attempts to explain a decorative style’s popularity in the pre-industrial world must be the question of how (as well as why) it came into being and was disseminated.  Charlotte Behr has addressed this issue and her hypothesis is, on its own terms and within its paradigm, rigorously constructed.  She points out the existence of key sites around the Baltic where political power seems to have been concentrated and where metalworking was carried out, frequently in precious metal and often in forms, and with decoration, that had some political significance.  That fundamental element of Behr’s argument appears plausible.  If one accepts – as seems reasonable – that these sites were foci for political gatherings then one can imagine a metalworker producing a new design, playing with well-known motifs and decorative grammars in a way that had contemporary resonance and being asked to produce similar pieces.  The craftsman might travel to other such centres and, as the design became more popular, it might well be copied and developed by other metalworkers.  Eventually, by the earlier sixth century, it seems that potters too were decorating some wares with motifs from the same general repertoire. That the style clearly and perhaps playfully reworked design components with political significance seems to provide a more satisfactory context for its geographically widespread popularity and development than an explanation pinning it to specific religious ideals.  Behr gives us a plausible mechanism for the dissemination of the style.  This allows us to envisage a specific artist or group of artists creating the style, developing its elements and responding to its increasing popularity and expanding audience.  This is important.  A weakness of current readings is that they appear to see decorated objects as a kind of photographic paper somehow passively absorbing, fixing and making visible the latent image of a society’s religious or cultural ideas.  Some writers are particularly clear about this.  A leading student of Style I, Alexandra Pesch, possibly letting her guard down in a more accessible on-line publication, goes so far as to state that:
‘Sie war nicht freie, individuelle Gestaltung, sondern gelehrte Anwendung von allgemein gebräuchlichen, motivischen wie technischen Grundlagen und Stilkriterien, die jeder Künstler nur wenig variieren durfte.’ (‘It was not free, individual design, but the learned application of generally current motivic and technical foundations and stylistic criteria, which each artist was permitted to vary only slightly.’)

All this in spite of the two substantial volumes of catalogued tweaks, variations, developments and ‘deformations’ brought together in Günther Haseloff’s monumental survey of Style I.  This in spite of the fact that, where two brooches look alike, the inevitable (and plausible) explanation is immediately offered that they originate with – or close to – the same craftsman.  As with the pagan interpretation discussed above, such ideas have enormous difficulty coping with this sort of well-documented stylistic dynamism and variety. I propose that, instead of continuing to assume this heteronomy for early medieval art, we allow our South Scandinavian artists the same creative agency that we are accustomed to accord the authors of written texts.  We have long been used to accepting that many authors of the fifth and sixth centuries, not least Augustine, played with as well as within stylistic rules to create sometimes subtly, sometimes radically, new works with enormous influence.  The fact that the names of whoever first decided to play with the rules of imperial decorative art to create Style I have been forgotten is no reason to deny them the same active role.

Style I, with its lack of resolution, is an ‘art of beginnings’ and indeed shares many of the features that, in The Century, Alain Badiou identified in the twentieth-century avant-garde.  It also shares features with other radical art movements that emerged in contexts of political upheaval or uncertainty, most notably the Jena Romantics of the very early nineteenth century (perhaps most famously Friedrich Schlegel), whose stress on the fragment constituted a key element of their work, or the aesthetic ideas of the Young Hegelians, slightly later.  Clearly, although the statements just made come perilously close to it, the idea that there might be some sort of algorithmic relationship between political turmoil and change, on the one hand, and art with particular features, on the other, would run quite contrary to this paper’s argument.  Nonetheless, some of these similarities of concept and context seem worth exploring. My reading of the mechanism suggested by Behr permits me to put a little more flesh on this suggestion and to give a little more solidity to a notion that accords significant, contingent, creative agency to some specific, sadly nameless, late antique artists.  This nevertheless leaves open the principal question this paper set out to confront: why should Style I have looked like that at that point in time and why should it have been so popular?

The key starting point in suggesting an explanation for Style I’s popularity is the date of its appearance, around 475.  This chronological conjunction with the western Empire’s final political disintegration cannot be mere coincidence.  Like others, I have previously dwelt on the allegedly non-Roman import of Style I – its supposed breaks with Roman decorative tradition and its southern Scandinavian origins.  As I argued above, I now think this reading is mistaken but the fact that a new style should appear at the time of the Empire’s political demise is nonetheless significant.

It is unsurprising, against the backdrop of imperial influence (especially from Northern Gaul) upon the various regions of Germania Magna sketched earlier, that the withdrawal of effective imperial governmental presence, beginning in the 380s, should have caused serious crisis in North Sea barbaricum as well as in the north-western provinces.  This is manifested in the analogous responses and material cultural forms that emerged in the decades around, and especially after, 400.  Long-occupied settlements were abandoned in the villa-zone of lowland Britain, in northern Gaul and in the ‘Saxon homelands’.  Replacing older habitations, a new form of settlement appeared, often looking less orderly than its predecessors, based around combinations of post-built ‘halls’ and clusters of ancillary buildings, most famously semi-subterranean Grubenhäuser.  In all three areas – lowland Britain, northern Gaul and coastal north-western Germany – changes in burial also occurred.  Again, these encompassed the abandonment of long-used sites and the foundation of new cemeteries.  New burial rites began to be used, most notably furnished inhumation.  This occurred earliest in northern Gaul – structurally the core of the North Sea province – and slightly later in Britain and northern Germany.  In this ritual the dead were interred accompanied by more grave-goods than had previously been employed, and frequently of a rather different form.  In male burials these encompassed items of official metalwork, like those found less often in Saxon cremations, and sometimes weaponry.  Female graves manifest their clearest difference from preceding burials through the fact that the deceased was interred dressed in a costume fastened and adorned by a range of brooches and other elements of metalwork.  These are sometimes decorated with some of the motifs found on the official belt-sets.  As yet such burials occurred mostly in small clusters within large necropoleis.  Finally, in several areas under discussion, increasing palaeobotanical data show reductions in the areas of managed farmland.  It is important to view these changes in terms of an exchange of ideas and influences around the North Sea, in response to a shared crisis, rooted in the same changes in imperial politics, rather than as evidence of migration.  When one sees the analogous structural changes occurring around the North Sea and indeed around the northern frontiers of the Western Empire, the limitation of explanations to those dependent upon a narrative drawn from later written sources becomes questionable.

Two points must be made immediately to qualify this argument.  One is that by the time that Style I became popular, archaeology suggests that the worst of the crisis in what we might call the ‘Saxon Homelands’ had been weathered, although it now hit previously more stable areas further south, along the old limes.  The other is that in the regions where Style I originated, rather than where it became popular, there seems to have been little or no serious fifth-century crisis in any case.  In the fourth century, political authority in the region seems to have used the ability to distribute Roman imports, entering the region through controlled ports of trade like Dankirke and Lundeborg, as one of its bases.  If anything, however, the archaeology of the region suggests that such authority had developed its own underpinnings that enabled it to weather the disruption and shifts in those patterns of relationship.  It may be that it was in this period that the Danes emerged as a growing political power in the region.  The Danes’ first entrance into history, in Chlochilaic’s raid on Francia, is one manifestation of this.  That, however, is not to suggest that the crisis of the Empire had gone unnoticed in southern Scandinavia, or indeed that the Danes’ expansion was not facilitated by crisis elsewhere.  Nydam Style, often seen as Style I’s immediate progenitor, emerged at this time and lasted throughout the first half of the century.  But it is important to note that Nydam Style plays entirely within the imperial rules of decorative grammar.  Rather than some sort of awakening of ‘Germanic Art’, it seems more plausible to view the appearance of Nydam Style – and Saxon Relief Style – as yet another instance in which the political leaders of barbaricum and their craftsmen responded to a failure to obtain the usual (Roman) signifiers of authority by making their own versions.

In lowland Britain those tendencies had been manifest since the second quarter of the fifth century, in the use of the Quoit Brooch and Saxon Relief Styles.  Along the Rhine there was some turmoil as the effects of the absence of Roman subsidies and regular diplomatic relations, which had perhaps enabled the frontier kings to weather the storms of the early fifth century, made themselves felt.  In Northern Gaul a number of competing groups strove for dominance.  The group which eventually triumphed was that based around the Frankish commanders of the Loire army, later known to history as the Merovingians.  But if Clovis’ burial of his father Childeric, the founder of the dynasty, tells us anything, it is that his power was symbolised at least as much in fairly traditional Roman fashion as in tweaks on that tradition and in new elements.

This is where we return to the concept mentioned at the start, of an ‘undead’ Roman Empire.  It seems clear to me that people in the 470s knew that the Empire had ceased to function, that, to continue with the mortuary metaphor, its body had stopped moving.  Yet it seems quite unjustifiable to make the move from that point, which I and others have made in the past, to assuming that they knew what we know: that the Western Roman Empire had ‘fallen’.  Had we an equivalent of Sidonius Apollinaris writing under Postumus in the 260s, it seems to me quite likely that he could have used precisely the same metaphors as those employed by Sidonius when he wrote to Arbogast of Trier two centuries later, about a vigorous Moselle replacing the Tiber’s dwindling stream.  The Empire had recovered from the hugely serious crises of the mid-third century and there was no way that Sidonius or anyone else could know that it would not at some point come back from the (seemingly) dead in the late fifth.  Indeed, around 510, when Theoderic and Clovis, both of whom allowed themselves to be addressed as augustus, were squaring up against each other, it might have seemed that after a generation of abeyance the characteristic fifth-century conflict between Gaul and Italy for dominance of the Western Empire was finally gearing up for its decisive show-down and that whoever won would carry off with it the title of Emperor.  Of course that never happened, but it might have done, and the awareness of the fact that it might have done might have led Justin I and Justinian I to instigate their ideological offensive about the loss of the West, which finally pronounced the Western Empire dead.

What we have in the last quarter of the fifth century is a situation in which new powers were coming to the fore, and/or where some stability was returning after earlier confusion or where competition and instability still reigned, but all in areas which had been for centuries dominated by, and where the stability of social and political order had largely been maintained with reference to, a particular form of authority and its expression.  Now the bases of that authority and the reference points for its symbols were in serious doubt.  It is this which unites the diverse areas wherein Style I became popular, among which we should also include the Upper Danube, rather than some sort of shared pagan religion or Germanic identity.

I would like to suggest that this explains, and accounts for the popularity of, Style I’s key features. Historical narrative is structured like a language.  Narrative selects events, whose meaning and significance emerge from their juxtaposition with the other events chosen, and their precise linguistic signification.  The space between them is closed up but simultaneously forms the ‘trace’ that gives the events their meaning.  As lived experience, though, those spaces are ones of radical chance and indeterminacy, of infinite possibility (something which is not reassuring).  To encounter what might be called the chronological ‘Real’ – a passage of time without signification, time which has not yet been recouped and closed by the symbolic – is surely traumatic or at least profoundly unsettling.

This gives us various possibilities for the aesthetic appeal of Style I, which I would like to explore in the last part of my paper.  In accordance with the rules of Style I, however, I do not wish to present any one of them as ‘correct’.  They may all be there, or just some, or a combination.  They may be present simultaneously while being mutually incompatible.  This of course allows me to present the fact that my lecture doesn’t have a proper conclusion as a frightfully clever stylistic move.

First, one might see Style I as having significant metaphorical value.  The take-over of the centre of the field by the periphery can be read as metaphor.  If my analysis of the symbolic associations of imperial metalwork is not pure fancy, one might see it as metaphor for the control of the political centre by the peoples once regarded as peripheral animals.  I suspect, however, that this is too easy a reading.  If we wish to see Style I as metaphor, I would prefer to see it as representing the absence of the old imperial centre.

Another useful way of understanding the process at work might be provided by two quotes from Judith Butler:

‘One might speculate: the act of symbolization breaks apart when it finds that it cannot maintain the unity that it produces when the social forces it seeks to quell and unify break through the domesticating veneer of the name.’

‘When people see the schema used to justify domination the dialectic collapses’.

The fifth-century master narrative is that of the collapse of an age-old signifying system, as the political centre that served to maintain and regulate it lost its hegemony – both in the usual and in the Gramscian sense of the term.  In this interpretation, the point de capiton – the old master-signifier – the Roman civic masculine ideal, which had symbolised the social structure and concealed internal divisions within a set of binary polarities based around it, became unfixed.  That would illustrate Butler’s point.  Once that happened all the other signifiers and oppositions began, like Style I animals, to float free again.  The problem I have with this reading is that it assigns too clear a vision to the creators of and participants in Style I art.  But perhaps some people did read Style I in that way.

As the fifth century wore on, but especially from its last third or quarter, the West – the North-West in particular – was entering a new world, one without any of the old symbolic fixed points.  Everything was up for grabs.  Social structure was unstable; authority, at the local level as well as at that of the new kingdoms, could be created, lost and won bewilderingly easily, quickly and unexpectedly.  Social relations were being renegotiated, often dramatically, in ways that could not have been envisaged a hundred years previously.  As mentioned, even fairly stable areas like Denmark nevertheless felt keenly the demise of the great imperial power at the centre of the European political world, which had served to keep everything in its place.

So this was a world of permanent beginnings.  Great kingdoms rose, and fell, within a couple of generations.  Local power seems to have changed hands equally swiftly as a result.  It was a world in permanent encounter with that which could not be symbolised, which indeed was something pre-symbolic.  How to symbolise, even retrospectively, events with no precedent?

In this context it is, it seems to me, hardly surprising that the human figure ceases to be depicted in anything other than (at best) stylised and (usually) ambiguous form or that even the animals show these characteristics.  If the opposition between imperium and barbaricum had been mapped onto one between human beings and animals, something which cannot have gone unnoticed in the ‘barbarian’ world, then the ambiguity between the human and the animal in Style I is highly appropriate in Western Europe after 476. The undecidability of the options may have been its attraction.

I suggest that part of the attraction of Style I, part of its metaphorical content, might be that these humans and animals appear and disappear, and can be represented as different things on different days, rather like (it seems) people’s identities in the fifth century.  Or that what seem to be the beginnings of figures or animals – human hands, animal eyes – turn out to lead nowhere.  The playful character of Style I – the way these act as visual riddles – also requires attention.  If the fifth century was a traumatic time, and if the period after 476 brought traumatic encounters of a different sort, then we should note that one means by which people have coped in other traumatic periods has indeed been more playful art.

That playfulness to some extent relies upon the rules of imperial decorative style, as was discussed earlier.  It has swapped things around.  Some Style I objects have areas that may be animals but equally may be fragments of geometric or floral design.  It occurs to me that discussing issues like this in terms of misunderstanding or less-than-competent rendition may be misguided.  Indeed, if one analyses Style I on the sorts of terms suggested here, then I think it might be possible to discuss its regional variations in terms other than those of purity and degeneracy: pure originals and ‘corrupted’ copies.  This in turn may call into question some ideas about relative chronology.

The first time I showed students how to identify Style I animals, one of them actually did ask me if I had been smoking anything before class.  It is an entirely valid response to Style I; this, if ever there was one, is an art of ‘what the hell is going on here?’  Set against the narrative of the fifth century, placed in this context, Style I reveals a true contemporary resonance and aesthetic; in short, set against its late fifth- and early sixth-century background, Style I is unambiguous: it makes perfect sense.