Francis’s news feed

This combines some blogs I like to read. It’s updated once a week.

April 22, 2017

Bare Lit 2017 – Going beyond the token writer of color by The Leveller

Taking place over two days, 22nd and 23rd April at Toynbee Studios in East London with a launch party on the 21st at St Ethelburga’s Centre for Peace and Reconciliation, Bare Lit brings internationally established and emerging writers based across the UK together to share works of fiction, non-fiction, poetry, playwriting and journalism.

Panel discussions explore a multitude of topics, from how writing by women is often marketed as writing for women, to a new wave of radical, activist and community print publications, to sex and erotica. They touch on the past by questioning how writers present and preserve histories of culture, food, music and community, and they look to novel mediums, from performance poetry to play- and screenwriting – and there’s much more on offer.

Bare Lit was founded last year by people involved in Media Diversified (MD), a non-profit organization publishing analysis, stories and cultural content by writers in diaspora. The festival’s founding team includes MD founder and filmmaker Samantha Asumadu, CEO of MD Henna Zamurd-Butt and Mend Mariwany, who now works with Skin Deep Magazine. The inaugural festival was also supported by writers Courttia Newland and Sunny Singh, who noted they were “rarely invited to literary festivals”.

Bare Lit’s second festival comes amid a swathe of publications working to raise the profile of talented writers of colour and address the ongoing lack of representation across both literary and media platforms more broadly. The award-winning platform for women of color gal-dem released a print magazine last year. Nikesh Shukla’s edited collection The Good Immigrant brought together 21 writers to “explore what it means to be black, Asian and minority ethnic in Britain today”. Sabrina Mahfouz’s edited collection The Things I Would Tell You brings together writers ranging from literary heavyweights to emerging spoken word artists to “blow away the narrow image of the ‘Muslim Woman’”. The first annual Jhalak Prize, co-founded by Nikesh Shukla, Sunny Singh and Media Diversified, addresses the lack of recognition for talented writers of color. These are just a few.

Ahead of the festival, we caught up with Directors Henna Zamurd-Butt and Mend Mariwany, as well as hearing from a range of panelists involved in the weekend.

Bare Lit seeks not only to challenge issues of inclusion and representation, but also to question what kind of representation that is – what writers of color are able to discuss at literary festivals.

Zamurd-Butt – “Our friends, many of whom were successful published writers, talked about how they didn’t get invited to literature festivals – and when they did it was as the token person of colour. So they ended up having to talk about race, identity and diversity rather than their own work and interests. Bare Lit was born to change that.

Our ethos is one of celebration of artistic excellence – as such it isn’t a place where we talk about diversity or inclusion. Instead we choose to focus on the writing, the mediums, the genres and themes. If the festival wasn’t organized by people who share common experiences with the speakers – those of living in diaspora and racialization – then we wouldn’t be able to do this.”

Both Zamurd-Butt and Mariwany made it clear the community of writers and creators surrounding Bare Lit was vital. Mariwany emphasized that as they draw their panelists from around them, from the networks they are familiar with, it’s important to care for this community, nurturing it and respecting its long-term survival. It’s a deeper relationship than one of ticking boxes. Panelists are brought together not just to create fantastic discussions but also to build the kind of spaces that can strengthen their work, through audience questioning but also by feeding back to each other.

Mariwany – “We are constantly being inspired by and looking for inspiration from the communities of writers of colour that we are embedded within. Really, and I said this the first year too, I also kind of wanted to make Bare Lit just for me. It’s important to be able to have a community of writers come together, share their stories, inspire each other, find each other but also meet publishers, readers and collaborate.

It’s important as well for children or younger people who are just starting or maybe haven’t even thought about being a writer to see panels made up of people of color to show that you don’t have to be white, middle class and from a lineage of writers. Hopefully it gives some inspiration.”

Around questions of inclusion and representation, the numbers are stark. In 2015, the UK’s three largest literary festivals featured over 2,000 authors; according to a report published by Spread The Word, only four per cent of them were from Black Caribbean, Black African, South Asian or East Asian backgrounds.

Bare Lit’s inclusivity lies not just in who the writers are but in what they do, giving them space to breathe and to discuss the many issues surrounding their work: becoming a writer, working with editors, their writing process and more.

Mariwany – “Of those four per cent, many of those writers will be asked to talk about diversity and find it difficult to talk about their own work. We really wanted to make sure not to do that. During the first year it was a bit like the elephant in the room and we did have those conversations, but we made sure not to get stuck there – to be able to talk about people’s craft, works and influences, the things they wanted to talk about.”

This was mirrored by Olumide Popoola (When We Speak Of Nothing, July 3rd):

Bare Lit festival addresses the lack of adequate representation of BAME writers at literary festivals, not just in relation to numbers but also in terms of what topics one gets to speak about. I value it as a wonderful space to be in conversation with other writers, publishers and editors, to discuss topics of craft, content and creative process.

This year also sees the release of the Bare Lit ’16 Anthology, featuring original, previously unpublished short stories and poetry from participants of the inaugural festival. One of the curators of the anthology, Kavita Bhanot, editor of the short story collection Too Asian, Not Asian Enough, said: “This anthology is about creating a space for writers of color to shine on their own terms.”

To many commenters, Bare Lit both challenges the failure of mainstream literary festivals to accommodate the talent of writers of color and, at the same time, asserts the value of their work and their contributions to what we define as ‘literature’.

Yasmin Gunaratnam, author, sociology lecturer and curator of Media Diversified’s academic space, noted:

For me the significance of Bare Lit is two-fold. Most obviously, it’s a breath of fresh air as a literary festival. Festivals have such a poor record of representing the work of writers of color, and they are not great at highlighting working class voices either. But more importantly for me, the festival, and at this post-Brexit moment, offers an alternative to parochialism. It provides glimpses into different worlds and expands our horizons. And of course, it’s about great writing.

The expansion of these horizons comes from numerous initiatives, not just in the past few years but stretching further back, that have sought to push against and re-imagine the nature of the UK’s literary landscape.

As author and academic Leone Ross (Come Let Us Sing Anyway, June 5th) emphasized:

Black people have always had to create their own vibrant literary spaces in this country. In the 60s, New Beacon Books blazed a trail for black publishing in the UK, along with Bogle L’Ouverture and Karnak House. In the 80s it was the International Book Fair of Radical and Third World Books – still going strong today, by the way! In the 90s, Tony Fairweather packed out venues with everyone from Iyanla Vanzant to Terry McMillan. Bare Lit follows in the steps of that tremendously important BAME tradition – organizing and loving our own art because we couldn’t wait on the mainstream to publish or promote us or to create spaces for our discourse and our growth.

Zamurd-Butt – “We feel that we are part of a burgeoning and self-supporting ecosystem of creatives that includes the Jhalak Prize, BAME in Publishing, Media Diversified, and many more initiatives where groups and individuals are no longer seeking to appease the mainstream.”

Poet and essayist Momtaza Mehri said:

Creating these spaces is vital in terms of being in community with writers who share your perspectives and priorities, whilst acknowledging and celebrating the differences between us as writers from racialized communities. To share work with both peers and audiences who may be alienated from the literary festival circuit is necessary and is the only way the cultural needle moves forward. We need these interventions as writers/readers/editors if we’re ever to even begin to address the gaps in wider imagination (or even our own).

Frances Mensah Williams (From Pasta to Pigfoot and From Pasta to Pigfoot: Second Helpings) highlighted:

Sometimes people ask, ‘Why do we need a literary festival that only celebrates writers from ethnic minorities?’ and I’m reminded of questions like ‘Why is there a need for Ebony magazine – can you imagine calling a magazine Ivory and getting away with it? Isn’t that racist?’ Bare Lit is seeking to address the poor showing of non-white writers among the largest literary festivals, giving a voice to writers from diverse backgrounds. I’m honored to have been invited to participate in the 2017 festival and to share my thoughts with writers who must not only craft a great story but navigate other unseen barriers. I’m really excited about attending Bare Lit because once you get past all the superficial skin color stuff, the rest is just fascinating people with diverse viewpoints of the world who can tell a cracking tale.

Gautam Malkani (Londonstani and The Story Distorted) praised the originality of the festival:

The thing that’s most striking is just how fascinating and original the program of events sounds. Because, let’s face it, there’s nothing diverse about the usual pigeonholing of writers of color onto panels about race and diversity. The success of the festival speaks to the pent-up demand for it. When the mainstream publishing industry says there isn’t a market for these kinds of books or for these writers, they’re basically saying they can’t be bothered to develop a market for it. But their laziness disguised as risk aversion ignores one basic business principle that applies to all industries, whether it’s books, literary events or breakfast cereals, which is that markets are grown at the margins. And this is what Bare Lit has demonstrated.

Science writer Angela Saini (Inferior: How Science Got Women Wrong and the New Research That’s Rewriting the Story) said:

It’s important to have festivals like Bare Lit to showcase the work and ideas of writers of color because mainstream media often overlooks all but the big names. We have a voice and important things to say, from a fresh perspective. In my field, science journalism, I’m pleased to say that there are lots of people from diverse backgrounds. But even we can do better to highlight the issues inside science that perpetuate sexual and racial stereotypes. I’m proud to be speaking on Sunday about themes such as these from my new book, Inferior.

While the fight to diversify media and literature spaces is ongoing, initiatives like Bare Lit are fast-tracking that process. Instead of waiting around or ‘begging’ for inclusion, they are building the literary landscape that values and respects a diversity of voices – not just for their own sake, but because there is value in their stories, experiences and lessons.

The voices, histories and experiences of BAME writers are not to be seen as a separate entity requiring access to the dominant structures, but rather as already fundamentally intertwined.

As Nikesh Shukla’s edited volume The Good Immigrant makes apparent, these histories and experiences are ‘British’ experiences. They are the experiences of people who have had to negotiate the effects of the very structures that many writers and activists struggle against, and their testimonies deserve the respect of anyone who wishes to understand better.

Within many of these initiatives and publications you find the truths, sometimes hidden and sometimes visible, about the story that Britain tells itself. They expose its underside: the lack of acceptance; the struggle to belong, to exist as oneself; being devalued, talked over, disrespected and paid less; having work and ideas stolen; having labor unrecognized and undercompensated.

If literature is a medium through which to understand human experiences, to make sense of the world, a vehicle of political expression, then we must acknowledge the debasing effect of a lack of inclusivity.

Image: Wasi Daniju



Marine Le Pen: like father, like daughter by The Leveller

For six years now, France’s barely-closeted fascist party, the Front National (FN), has worked tirelessly to convince us, the French electorate, that it is a respectable political party.

The message has been that the far right’s glory days of throwing migrants into the Seine to their deaths are over, that they have changed, and that they are now respectful and moderate people, with the greatest interest in mind for all of France.

Six whole years of dazzlingly smooth PR, which has brought the FN neatly into the mainstream.  It is, of course, a pack of lies.

Since Marine Le Pen became the leader of her oligarchic party in 2011 (replacing her openly anti-Semitic father), she has done everything in her power to hide every last bit of overt racism from the cameras of the press. Exit the bald thugs in leather jackets and military trousers; enter the clean operators in suits and ties.

The FN discovered the optical power of ‘dapper’ fascism long before the alt-right dared to show itself.  And it’s been working quite well.

But sometimes it’s hard to reconcile l’amour de la patrie with a facade of good intentions.  When Le Pen appeared on a right-leaning radio show just two weeks before election day, she was asked if she thinks Jacques Chirac (President, 1995–2007) was wrong to have admitted France’s responsibility for the Vel d’Hiv Roundup – one of France’s worst crimes of World War II.

To this, Le Pen kindly and politely replied “oui”. She continued: “France is not responsible. Generally speaking, those who are were the ones in command at the time. France has been bashed for years now, we have taught our kids that they should be ashamed of our historic past.  And I want them to be proud again.”

This is a good moment to rewind just a bit. The Vel’ d’Hiv Roundup is not a historical detail that you can gloss over like dust on one of Louis XIV’s old cupboards.  On July 16th and 17th, 1942, some 7,000 French policemen mass-arrested 13,000 Jews and imprisoned them in Paris’s Vélodrome d’Hiver stadium.

Nearly all of them – some 12,900 – were sent to Auschwitz and other death camps, and fewer than a hundred of those survived.

One could point out that it was a demand of the Third Reich, at a time when France was under Nazi occupation. But that doesn’t change the fact that those who committed the crime did so not simply as French citizens but as servants of the French government.  It was the state of France, with its police force and its administrative machinery, that opted for institutional complicity.

In 1995, President Jacques Chirac officially recognized this culpability for the first time, permanently casting light on this dark stain on our history – as many other nations had already done for their own crimes.  It urged us to grow and learn as a nation without denying our past and thus repeating the same mistakes in the future. But it appears to be too much for our wannabe Joan of Arc to swallow.

Such denial of responsibility is quite incredible, really; such spiteful dissonance in the face of our recorded history, such fragile pride in masking the fact that, maybe, we are not perfect in every way.  But it’s not surprising.

In the same way that Le Pen stubbornly denies that she illegally took money from the European Parliament, she also denies that we, as a nation, have responsibility for our actions. She cannot accept that this duty won’t go away just because one thinks that France has been “bashed”.

Why, after so many years of trying to mask the Nazi stench of her party, is she making a mistake like this? Simple: it is no mistake.

From the beginning of this years-long re-branding, Marine Le Pen has consistently positioned herself and her movement as anti-establishment (sound familiar?) – even as she uses the machinery of that same establishment to hide from the cops with parliamentary immunity.  She poses as the victim of a decadent society that stigmatizes her for being true to the old French values, incorruptible, without an ounce of irony.

Of course Le Pen’s words have been met with widespread condemnation across the French media, and of course the general public is offended…but they are not her target audience, and their outrage is not her concern. She built her base by appealing to France’s very own deplorables, the ones who think that they cannot express their complaints out loud, the ones who feel duped by so-called politically correct society.

With this sort of declaration, she is speaking directly to a fragile mob that seethes with resentment at every reminder that we are not above reproach.  They are fed up with the fact that they cannot shout out loud their French pride unchallenged, mixing racism and negationism in the process.

If you have the stomach for just a brief glance at the comments section of one of the many op eds condemning her hate speech, you will see legions of Internet trolls shouting their unshakable support of their candidate.  You’ll observe one of the great flame wars of our time.

Politicians from other parties closed ranks against Le Pen’s position, such as Christian Estrosi of Les Républicains (the main center-right party).  Estrosi said that “by denying the French state’s responsibility, she is following her father on the path of indignity”, a position broadly echoed by left-wing candidates Benoît Hamon (Parti Socialiste) and Jean-Luc Mélenchon (la France Insoumise).

All the while Le Pen is quietly smiling, her campaign having been built on the assertion that those guys are all the same, liars and slaves of the system.  With voting now just days away, the FN are still polling exceptionally well and are tipped to be the front runners in the first round, putting them dangerously close to winning the presidency.  Le Pen’s calculation, that antisemitism is no longer a vote-killing taboo, appears to be paying off.

This allows her to shore up her base with relative impunity, reassuring those who still believed in her daddy, a man who described the gas chambers as “a historic detail”.  Because we wouldn’t want to lose those guys on election day, would we?

Is there any need of further proof that the Front National is still the same old racist party? No, not really. But it has now become clear that when it comes to watching fascists rise to power, France doesn’t care.


Image: Blandine Le Cain



GDS Retrospective #2: tools for making & communities by Richard Pope

Tools that help teams make things faster and tools that help teams talk to each other better are very powerful levers when it comes to digital transformation.

That networks eventually emerged across government in the form of Slack channels and departmental git repositories is great, and I am genuinely excited every time I see an update to the GOV.UK prototyping tools and design patterns that edges them towards becoming a more solid, generic tool-set*.

But, collectively, we did not realise that this stuff was important early enough at GDS.

The Government Service Manual project could have gone in that direction - a place to share and view the progress of projects across government.

And it nearly did (I think the prototype is still on Github somewhere). It could have become a place to link to the source code on GitHub, to list team members and resources of a project, and to share progress against the Digital by Default Service Standard throughout a project. A bit like Launchpad does for various FOSS projects. In doing so, it could have set the expectation that projects should be open and that they should share their work, as well as set the expectation of the sort of infrastructure that needed to exist to support teams – email groups, git repositories, blogs, IRC (this was pre-Slack, IIRC).

That project was probably never the right place to do that work, but we should have done it in some form, somewhere. We could have provided more tools for communities across government earlier, and we could have built more tools to make it easier to make things.

It's hard to say why it didn't happen. Partly, we just didn't collectively realise it was a priority. Partly, these things are quite boring, quite geeky and hard to explain to people who've not had to rely on such tools before. Partly, they cross disciplines/professions, which makes the ownership problem harder and the opportunity harder to see. Partly, there was just lots going on!

Anyway, my main reflection on this is: start investing in your tooling early.


* The accessibility thinking baked into the design patterns is so good that it could have a big impact if non-government developers started regularly using them in place of Bootstrap or Foundation.


GDS Retrospective #1: knowing when to run by Richard Pope

This is part of a series of blog posts about reflections on my time at GDS. See background and caveats.


A solid approach to building digital services and to digital transformation emerged from the work of GDS - a synthesis of established processes of user-centered design, civil service processes and the collective wisdom (and, inevitably, biases) of lots of people*.

It is broadly set out in the Government Service Design Manual and can be characterised as 1) assume you know nothing 2) set up a multi-disciplinary team of 5–15 people 3) then do some combination of Discovery, Alpha, Beta, Live over a period of 4–12 months.

This is a good thing.

The UK government now has an approach to building digital services that is better suited to the world of shortened development cycles that a combination of open-source, commoditised platforms and integration testing have brought us.

Reflecting on my time at GDS, I think there is something missing from that approach, as it is understood.

Not everything that followed that process succeeded, and not everything that succeeded followed that process.

From the original alphagov project to the reset of the Universal Credit project, there are examples of where just making something at speed and setting as much direction as possible worked as a strategy to create the space and permission to do more work, and to change the way people view a problem.

When it comes to digital transformation, I think we are missing a mature approach to knowing when to run fast and when to follow a more structured process.

We don't have the words** and, as a result, moving fast gets used haphazardly as an approach and risks accusations of cutting corners.

I don't know exactly what form that narrative should take, but I think it could include some of the following:

  • Generating ideas and prototypes early on is OK, so long as the ideas are loosely held. (Projects can fail through a lack of good ideas and speed just as much as through a lack of top-cover.)
  • That moving fast is a strategic tool, with a different set of outcomes.
  • Designers should design in a way that embraces high-level concepts but allows for details to be filled in later (more on this coming in a future blog post).
  • An acceptance that, sometimes, we are not starting from scratch and the only thing to do is get something done. (By the time Alphagov started, the flaws and opportunities of Directgov had been debated and prototyped again and again at GovCamp, at RewiredState hack days and in the community surrounding mySociety.)

As the development cycle gets even shorter (see Google App Maker and the like as a hint of what the future might bring), knowing when to run is only going to get more pressing.

* The initial work on the Service Manual was in part about removing a general sense of unease about duplication, and about what technical best-practice should look like, as the GDS got bigger. There had been a few sporadic blog-posts, but the process of writing the manual distilled lots of things for the first time.


** Because we didn't write about it?


Eyes on Tim’s flight by Goatchurch

I’m not going to learn too much from myself, so I’ve got the raw data from Tim’s flight as measured by his Kobo/bluefly IGC logger.

timgps

As you can see, he ranged a good deal further than I did from Mam Tor, reaching an altitude of 1500m, and he reported scoring glide angles of 10 to 1 on his glider much-fancier-than-mine-of-which-I-am-now-jealous.

Let’s see if we can corroborate this from his data with the power of pandas and matplotlib.
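A note on setup before the snippets: everything below assumes the usual scientific Python imports plus an IGC-loading helper. The module name here is my guess, standing in for whatever the author’s loader is actually called.

import math
import numpy
import pandas
import matplotlib.dates
import matplotlib.pyplot as plt
import flightdata as fd  # hypothetical name, standing in for the author's own IGC-loading module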

pIGCtim = fd.LoadIGC("Tim-1492470000.igc")
pQ = pIGCtim.resample("1S").interpolate(method="time")  # fill uneven records

dstep = 30  # rolling window length in seconds
vario = (pQ.alt.diff(dstep)/dstep)  # m/s averaged over 30 seconds
groundvelocity = numpy.sqrt(pQ.x.diff(dstep)**2 + pQ.y.diff(dstep)**2)/dstep

Here is vario:
timsvario

Here is ground velocity:
timsgroundvel

The glide slope (normally called the glide-angle, even though we don’t quote it as an angle) is the groundvelocity divided by the descent rate. When the descent rate is close to zero, you get stupid numbers, so we need to throw those out and only quote for the parts where there is reasonably direct gliding flight.

glideslope = -groundvelocity/vario

# pandas row-wise subselection technology:
gs = glideslope[(glideslope>0) & (glideslope<20) & (groundvelocity>8)]

plt.scatter(gs.index, gs, label="glide slope")

timsglideslope

It does appear that around 12:51 glide angles of around 10 to 1 were achieved for a sustained period. The Kobo flight computer doesn’t filter or process the spot measurements of glide angle, so this is something a pilot must sense by glancing at the device repeatedly and recognizing that a similar number has occurred multiple times recently.
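One way to mimic that filtering by machine, purely as a sketch, is a rolling median over the filtered glide slopes gs from above, which settles near 10 during the steady part of the glide:

gs_smooth = gs.rolling(60, min_periods=10).median()  # median over the last 60 retained samples
plt.plot(gs_smooth.index, gs_smooth, label="rolling median glide slope")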

From the high point, a very long descent occurred, first to the east, and then to the west in what was a light northeasterly wind.

t0, t1 = pandas.Timestamp("2017-04-18T12:48"), pandas.Timestamp("2017-04-18T12:56")
tX = pQ.x.idxmax()  # furthest east extent
tX1 = tX - pandas.Timedelta(seconds=10)  # leave some space at the corner
tX0 = tX + pandas.Timedelta(seconds=40)
plt.plot(pQ[t0:t1].x, pQ[t0:t1].y)
plt.plot(pQ[t0:tX1].x, pQ[t0:tX1].y)
plt.plot(pQ[tX0:t1].x, pQ[tX0:t1].y)

timglidedown

Now, I am sure that Tim would agree that this type of uninterrupted glide down from 1600m to 600m in some 9 minutes is not exactly the kind of event we are looking for; we want to catch another thermal well before we get low, to go back up, and fly further on without having to turn tail at the 1000m mark like a coward and hope to have enough metres spare to retreat to the home ridge.

No, on this day folks were doing their fifth hundred-mile XC flight this week — on a f***ing paraglider. What is wrong with us? According to them: “all you have to do is speed up in sink and slow down in lift.”

Yeah, and playing the piano is just a matter of hitting the keys in the right order.

The answer must be somewhere in these numbers, coz we ain’t getting any help from these born-with-talent flying buggers.

As we have 9 minutes of clear descent here, first going east (outwards) and then going west (home), here is the calculated 30-second rolling sink rate, with clean matplotlib labels on the timeline:

fig = plt.figure(figsize=(8, 5))
ax1 = fig.add_subplot(111)
ax1.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
plt.plot(vario[t0:tX1], label="sink-east")
plt.plot(vario[tX0:t1], label="sink-west")
plt.ylabel("m/s")
plt.legend()

timglidedownvario

And the ground velocity there and back looks like:

timglidedownspeed

That’s 11.8m/s out and 17.6m/s back, suggesting a prevailing wind of (17.6 − 11.8)/2 = 2.9m/s.

Well, not quite. This doesn’t agree with the drift of the thermal immediately prior, which was 5.3m/s according to my calculations:

t0, t1 = pandas.Timestamp("2017-04-18T12:40:00"), pandas.Timestamp("2017-04-18T12:44:35")
plt.plot(pQ[t0:t1].x, pQ[t0:t1].y)
a, b = pQ.loc[t0], pQ.loc[t1]
math.sqrt((b.x-a.x)**2 + (b.y-a.y)**2)/(b.name - a.name).seconds  # the row index labels are timestamps

timsthermalclimb

The heading when in that thermal was 212 degrees, while the out leg was 075 degrees and the return leg was 263 degrees, which should give a predicted ground speed difference of
5.25*(-cos(246-75) + cos(212-263)) = 8.5m/s, still some way off from the measured out-and-return difference of 5.8m/s.
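For what it’s worth, here is that back-of-envelope check in code, using the numbers exactly as quoted above:

drift = 5.25  # thermal drift speed in m/s
outwind = drift * math.cos(math.radians(246 - 75))    # wind component along the out leg
backwind = drift * math.cos(math.radians(212 - 263))  # wind component along the return leg
print(backwind - outwind)  # predicts an 8.5m/s difference, against the 5.8m/s measured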

So much for that idea.

And now for something really useless, here’s the plot of the polar curve (airspeed vs sinkrate) based on the two components of that glidedown:

t0, t1 = pandas.Timestamp("2017-04-18T12:48"), pandas.Timestamp("2017-04-18T12:56")  # back to the glidedown window
plt.scatter(groundvelocity[t0:tX1]+3.0, vario[t0:tX1], color="b")  # out leg: add back the ~3m/s headwind
plt.scatter(groundvelocity[tX0:t1]-3.0, vario[tX0:t1], color="r")  # return leg: subtract the tailwind

timspolar

Not at all curvy.

Who knows what’s going on, with the wrong filtering, uncalibrated up and down air currents or whatever. He didn’t have an airspeed sensor like I did, and I didn’t do any long glides — and the airspeed sensor is probably bogus anyway as it’s on the base-bar and therefore changes angle when I pull the bar in.

So, the gliding polar remains elusive.

But what about my own data?

Well, my visiting aunt with her good camera took this nifty photo of me while I was in a high banking turn in a good thermal after narrowly averting a disappointing trip straight down to the bottom landing field, which would have ruined my day and made me all upset and sorry for myself:

banking

The basebar spans 18 pixels wide and 27 pixels high, suggesting a bank angle of atan(27/18) = 56 degrees.

This can’t be so, as the BNO055 orientation sensor never got higher than 38 degrees in the roll direction.

In fact, using the EXIF image properties timestamp against the photo of my takeoff gives a 619-second offset, where my altitude was 500m (level with takeoff) on a heading of 050 with a bank angle of no more than 20 degrees, while the bearing to the cement works (the smoke in the bottom right hand corner) is about 110.

A horizontal stick that is lifted up by its left end by 20 degrees and then spun towards you about the point on the ground by 60 degrees will appear to have an angle against the horizon of atan(sin(20)/(cos(20)*cos(60))), or 36 degrees, which is quite different from 56 degrees. And I don’t think I can fudge these numbers.

So that’s not working either.

Humph.

Maybe the calibration is bad.

I blame this f***ing general election that’s just been called.

Bastards!

Why does everything else in the world seem to proceed half-decently, from ship-building to finding the cure for cancer, with a high probability that suitably intellectually able people will be employed to engage in the endeavor and perform as well as any human could be expected to do, yet in the realm of high-level public administration and the maintenance of civilization we are served by an endless parade of self-serving ignorant amateurs and con-artists? Why can’t these morons find something better to do with their lives than meddling in our affairs? Like paint pictures or go hang-gliding.

UPDATE: Turns out there was an error in my orientation sensor computations.

Here is the correct set of numbers close to the moment that picture was taken:

time                     ax    ay    az      gx    gy     gz     roll        pitch      heading
2017-04-18 11:59:00.506  4.06  6.20  -12.11  1.76  -7.65  -5.86  -51.328610  10.383540  134.798758

A photographed apparent bank angle of 56 degrees is completely consistent with atan(sin(51)/(cos(51)*cos(134-110))) = 53.5 degrees. I can’t stress enough what a rare photo this was: perfectly level with the photographer, with a straight horizon to measure against and a known angle.
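Both the 36 degree and the 53.5 degree projections come from the same little formula, so here it is as a sketch:

def apparent_bank(roll_deg, yaw_deg):
    # apparent angle against the horizon of a banked bar viewed yaw_deg off axis
    r, y = math.radians(roll_deg), math.radians(yaw_deg)
    return math.degrees(math.atan(math.sin(r) / (math.cos(r) * math.cos(y))))

print(apparent_bank(20, 60))         # ~36 degrees: the pre-correction mismatch
print(apparent_bank(51, 134 - 110))  # ~53.5 degrees: consistent with the photo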

The other numbers here are sqrt(z.gx**2+z.gy**2+z.gz**2) = 9.796 = g, which is the right value, and sqrt(z.ax**2+z.ay**2+z.az**2)/9.8 = 1.44, which means I was pulling about 1.4g in that turn.

Who knew I was having so much fun!


Venture Capital is a Capacity Industry with Boom and Bust Cycles by Albert Wenger

A long time ago when I worked as a management consultant, I did a study for Lafarge which has a huge cement business. The study involved real sleuthing as we were trying to determine the operational capacity of cement plants owned by others and under construction all around Europe. At times I felt like a spy as I was hanging out counting trucks going in and out of facilities and sidling up to engineers who worked there as they were getting coffee before work.

Why all the sleuthing? Well, Lafarge understood well that cement was a capacity industry with boom and bust cycles. They wanted to figure out where the cycle was in order to figure out whether or not they should build more capacity themselves. The cycle in cement looks something like this: construction in a region heats up increasing the demand for cement. Since supply is inelastic the price rises. All of a sudden it becomes very profitable to build a new cement plant. Several players rush in and all build new cement plants. This upswing may continue for some time with all the new plants profitable too and even more coming online. Then construction slows down and voila: an overcapacity of cement factories results in a supply overhang which in turn causes a collapse of cement prices. The collapse is dramatic because the construction cost of the plant is sunk cost.

Venture capital shares some of the characteristics of the cement industry. It too is a capacity industry, because it takes a long time to raise a fund, but once you have raised a fund it has capacity to invest for a long period (often 4–5 years). When there is a wave of innovation, it produces good investment returns, which results in many new funds being raised. Again, this capacity build-up will continue as long as returns are good. Existing funds will raise bigger funds, new funds will be formed, and corporations will spin up their own venture arms. Inevitably, of course, the innovation wave will run its course and you are left with an overcapacity of funds to invest. The equivalent of the price of cement collapsing is the collapse of returns.

The funds that perform worst will be the ones that entered most recently or expanded the most *and* are following the existing playbook in the industry. The ones that will do best are the ones that remain disciplined in size and also are looking actively for the next wave of innovation. The most promising candidates in terms of innovation are blockchains and hard sciences (especially in medicine, but also materials, energy and possibly space). For signs of bust look for more investments like Juicero.

I have been looking for global statistics on the expansion of committed capital for the current cycle but have so far found only regional ones, and they show a strong expansion of capacity. Since the Internet innovation cycle has been a global one, and since investors have been deploying capital with far fewer geographic restrictions than before, it would be interesting to see the global amounts added up.


Uncertainty Wednesday: Independence by Albert Wenger

Last week, I provided a recap of what we have covered so far in Uncertainty Wednesday. To go on from here we need to introduce some more concepts that are usually covered earlier but that I believe will make more sense in the framework that we have now established. The first one of these is the concept of independence. Two events are said to be independent if “the occurrence of one does not affect the probability of occurrence of the other.”

Now is a good time to remember that in our simplest model we have four elementary events AH, AL, BH and BL which represent all the possible combinations of the world being in either state A or state B and us receiving either signal H or signal L. We then figured out the probability of events, e.g. P(B), in terms of the underlying probabilities of the elementary events, e.g. P({BH}) and P({BL}). From there we went on to define the concept of a conditional probability, such as

P(B | H) = P({BH}) / P(H)

So if we look at the definition of independence above, what we are looking for is the situation where observing the signal does not tell us anything about the state of the world. Expressed as a formula, we are looking for a situation where

P(B | H) = P(B)

Now we know that P(H) = P({AH}) + P({BH}) and so what we are looking for is

P(B | H) = P({BH}) / [P({AH}) + P({BH})] = P(B)

Let’s remind ourselves what P({BH}) means. It is the elementary event where the world is in state B *AND* we receive signal H. Another way of writing that is as follows

P({BH}) = P(B ∩ H)

where ∩ denotes intersection. Why? Remember that events such as B and H are sets of elementary events, in particular

B = {BH, BL} and H = {AH, BH}

so B ∩ H = {BH}

With that we can re-write the above condition for independence as follows

P(B | H) = P(B ∩ H) / [P(A ∩ H) + P(B ∩ H)] = P(B)

Now what should P(B ∩ H) be in terms of P(B) and P(H) to make this hold?

Let’s consider P(B ∩ H) = P(B) * P(H), and likewise P(A ∩ H) = P(A) * P(H). Then

P(B | H) = P(B) * P(H) / [P(A) * P(H) + P(B) * P(H)] = P(B) / [P(A) + P(B)] = P(B)

The last step here is simply the result of P(A) + P(B) = 1 by definition. The way we set up the problem, the world is either in state A or in state B, and the probability that it is in one of the two is therefore 1.

While this definitely doesn’t pass as a rigorous proof, what we see is that in our 2-state, 2-signal world the following two conditions are equivalent and each marks independence:

Unconditional probability = conditional probability 

and

Probability of any state + signal combination = product of probabilities
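As a quick numerical sanity check of that equivalence, here is a sketch in Python with made-up numbers:

# made-up probabilities for the states and the signals
P_A, P_B = 0.7, 0.3
P_H, P_L = 0.6, 0.4

# construct the elementary events as products, i.e. impose independence
P_AH, P_BH = P_A * P_H, P_B * P_H

# P(B | H) = P({BH}) / [P({AH}) + P({BH})]
P_B_given_H = P_BH / (P_AH + P_BH)
print(round(P_B_given_H, 10), P_B)  # both 0.3: conditional equals unconditional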

More to come on independence next week. For now you should ask yourself based on the above whether assuming that two events are independent is imposing a lot of constraints or few constraints. Or put differently, will we encounter a lot of independent situations or few independent situations in the world?


“No One Really Knows” - No Need for Alarm about How Neural Nets Work by Albert Wenger

There’s been a lot of handwringing recently about how we don’t really understand deep neural networks. The MIT Technology Review even published an article with the sensationalist headline “The Dark Secret at the Heart of AI.” It had the subheading “No one really knows how the most advanced algorithms do what they do. That could be a problem.”

Well sure it could be a problem, but let’s get one important point out of the way: no one really knows how people do what they do, yet we have them do all sorts of things every day. People drive cars. People diagnose diseases. Does it matter that we don’t know how they do it? 

And no one really knows how existing large scale software systems work either. By that I don’t mean that we don’t understand small components of these systems. I mean no one can grasp the whole in one go and understand how it works. We can follow pieces at a time but their interaction is extremely complex. How does Google work when you run a search query?

I am not trying to dismiss the “black box” nature of neural networks, but I think instead of being alarmist we need to think about what it is we are actually worried about and what to do about it.

It all comes down to understanding failure modes and guarding against them.

For instance, human doctors make wrong diagnoses. One way we guard against that is by getting a second opinion. Turns out we have used the same technique in complex software systems. Get multiple systems to compute something and act only if their outputs agree. This approach is immediately and easily applicable to neural networks.
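As a toy sketch of that guard, assuming nothing more than model objects exposing a predict method that returns a single label (no particular library intended):

def second_opinion(models, x):
    # act only when independently built models agree; otherwise defer to a human
    predictions = [m.predict(x) for m in models]
    return predictions[0] if all(p == predictions[0] for p in predictions) else None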

Other failure modes include hidden biases and malicious attacks (manipulation). Again these are no different than for humans and for existing software systems. And we have developed mechanisms for avoiding and/or detecting these issues, such as statistical analysis across systems.

There is always more work to be done to improve how we can detect and avoid failure in systems. So let’s do this work for neural networks and we will be able to use them effectively. As an important aside: I am pretty sure that our work with neural networks will result in better understanding down the line of how they work *and* of how humans work.


This weekend, it's the beginning of the end for Cassini by The Planetary Society

NASA's long-lived Cassini spacecraft is about to buzz Titan for the final time, putting it on course for a spectacular mission finale that concludes in September.


Spring 2017 issue of The Planetary Report now available by The Planetary Society

The Spring 2017 issue of The Planetary Report is in the mail and available online now to our members!


Our asteroid hunters are trying to save the world. Here’s what they’ve been up to by The Planetary Society

Here are some recent reports from our NEO Shoemaker Grant program asteroid observers, who are quite literally trying to save the world.


April 21, 2017

The book so far by Simon Wardley

Writing is not something I find comfortable. Lately I've been consumed with putting together a book on mapping. It's getting there. Slowly.

Chapter 1 — On being lost
An introduction into the concept of situational awareness and the strategy cycle.

Chapter 2 — Finding a path
How I built my first map of business.

Chapter 3 — Exploring the map
The shock of discovering common economic patterns.

Chapter 4 — Doctrine
The even bigger shock of discovering that some patterns are universal whilst others are context specific.

Chapter 5 — The play and a decision to act
Using maps to make a choice including some basic strategic play such as ecosystem models.

Chapter 6 — Getting started yourself
How to start mapping and some useful books to read.

Chapter 7 — Finding a new purpose
Using mapping on itself and discovering a new purpose. The underlying work on evolution.

Chapter 8 — Keeping the wolves at bay
The dangers of simplicity and the concept of flow.

Chapter 9 — Charting the future
The use of weak signals and economic cycles.

Chapter 10 — I wasn’t expecting that!
The evolution of organisations and different forms of disruption.

Chapter 11 — A smorgasbord of the slightly useful
A collection of useful mapping topics that by pure coincidence might be needed for the following scenario.

Chapter 12 — The scenario
Something for you to get your teeth into.

Chapter 13 — Something wicked this way comes
Analysis of that something from chapter 12.

Chapter 14 — To thine own self be true
The path you probably should have taken in chapter 12.

Chapter 15 — On the practice of scenario planning
On scenario planning and the concept of roles. Diving a little more deeply into financial modelling.

Chapter 16 — Super Looper
A walkthrough of several loops of the strategy cycle.

I've around 4-5 more chapters to go and then this first pass, this introduction into mapping, should finally be finished. Except of course for the rewriting, editing, rejigging, frustration, burying in soft peat for 6 months in triplicate, tearing up, cursing etc etc.


April 20, 2017

Round, round, get around, I loop around by Simon Wardley

Chapter 16


(draft)

The LFP example is based upon a real-world event. I say “based” because I usually take time to disguise the actual event to protect any guilty parties. In this case, the haphazard and stumbling CEO was ... me. The variations from the real world include that it was I, not the client, who proposed this concept of worth-based development, and that I put in the effort to build those numerous financial models. True to form, however, I also fought plenty of internal battles over inertia to make this project happen. In this case, I’m going to use that LFP scenario to examine mapping in practice. I’m very wary that my long experience with mapping means that I tend to gloss over parts through assumption. In much the same way, I spent six years assuming everyone already knew how to map and it wasn’t until 2011 that I started to realise they didn’t. With that in mind, I’m going to go into excessive detail in the hope that I don’t miss anything useful to you. To keep it relevant and not just a history lesson, I’m going to go through the steps of how you would tackle the LFP scenario as if it were happening today.
To begin, I always start with the strategy cycle. To me, it doesn’t matter whether I’m looking at nation states, industry, corporates, systems or even individuals – the strategy cycle applies. For completeness, I have provided this cycle in figure 198.

Figure 198 – the strategy cycle





Our initial purpose for this LFP system is to help create leads for our client. That is what they need and it is also how we will be measured. We don’t have to agree to the proposal but if we choose to accept it then our focus must start here. Of course, we have our own needs – to survive, to make a profit, to have fun - which we could choose to map. In this case, I’ll elect not to.

We know we also have a “why of movement” question in the scenario – do we build the entire system in-house or do we use elements of a public platform? Do we go here or there? Why? Before we can answer this, we need to understand the landscape a bit more. Fortunately, in the LFP scenario a map has been kindly provided by engineering along with the more common financial models. As tempting as it is to dive straight into the financials, start with the landscape. I do love a good spreadsheet, I’ve spent years of my life immersed in cashflows, GAAP, chart of accounts, options analysis, business models and all manner of delightful things. However, a word to the wise, put these to the back of your mind for the moment. The financials can often be skewed by a bias to the present. 

With the map provided, one immediate thing I’m going to note is that we have inertia against using the public platform space via both security and the systems group. I’m going to mark that onto the map in figure 199.

Figure 199 – adding inertia.


Now let us focus on that platform change, the shift from product to a more industrialised form, which in this case means utility. As noted many times before, we have a common economic pattern of co-evolution, i.e. as an act evolves we often see a corresponding co-evolution of practice. Let us concentrate here, remove all the other bits of the map and add in co-evolution. I’ve done this in figure 200.

Figure 200 – co-evolution


By applying that basic pattern to our map, we can anticipate that as the world shifts towards more utility like code execution platforms, some new-fangled practice (a sort of DevOps 2.0) will emerge. We don’t know what those practices will be as they emerge in the uncharted space. We don’t know when precisely this will occur. But we know that we will have inertia to this change. We also know that such changes tend to be rapid (another common economic pattern known as the punctuated equilibrium). We can also go a bit further. 

The nodes on the maps are stocks of capital, with the lines representing flows of capital between them. With evolution from product to a more industrialised form, we normally expect to see flows of capital away from the past industry into more industrialised providers and/or new higher order systems and/or new practices. I’ve marked these flows of capital – where to invest and what will become legacy – onto figure 201.

Figure 201 – flows of capital


Capital flows to the more industrialised components along with the new higher order systems that these enable – collectively we can call this the new industry. There will also be new practices (i.e. co-evolved) that will replace those past practices. The new higher order systems will themselves enable new needs (technically, they expand the adjacent possible, the realm of new things we can do), which means new customers. The past ways, stuck behind inertia barriers and increasingly devoid of capital, will die off. If this sounds familiar, then it should. This is what Joseph Schumpeter termed “Creative Destruction”. The question is when this will happen. For that I should turn to weak signals and examine those four conditions – does the concept of utility platform exist, is the technology there, is it suitable and do we have the right attitude?  See figure 202.

Figure 202 – do the factors exist?


In this case, someone is providing such a platform hence the concept and technology exist. We have services like AWS Lambda. In the scenario, there’s obviously some sort of dissatisfaction with the current models otherwise the client wouldn’t be looking for a new way of doing things. The attitude seems to be there, maybe this platform space will help? But is it really suitable? I tend to use weak signals to help determine that but you can also use the cheat sheet. When you examine an activity, it often has characteristics from more than one stage of evolution e.g. it might be strongly product and a little bit commodity or vice versa. You can use this to help you refine your understanding of where something is. In this case, I’m looking for product characteristics with the emergence of commodity.

I’ve published a more advanced cheat sheet in figure 203, with each stage (I to IV), the terms used for different types of components (activities, practices, data and knowledge) plus the general characteristics. 

Figure 203 – The Cheat Sheet


So, let us examine the platform space today in 2017. What we’re focused on is a code execution environment, which in the product world is normally described as some form of stack (e.g. LAMP or .NET) or in the utility space where we have the emergence of systems such as Lambda. It’s important to focus on the “code execution environment” as unfortunately platform is one of those hand-wavy terms which gets used to mean any old tripe – see also ecosystem, innovation, disruption and almost anything in management that is popular. Don’t get me started on this one as I’m not a fan of the field I work in. I’m sure along with strategy consultants talking about “earlobes for leadership” (HBR, Nov 2011), it wouldn’t take me long to find a bunch of them talking about how a “cup of tea is an innovative platform”, such is the gibberish which has invaded management.

From the cheat sheet, comparing stage III (product) and IV (commodity):

Ubiquity? Is the platform space rapidly increasing OR widespread in the applicable market? I think it’s fair to say that this is very widespread. It’s not the case that you normally have to suggest to a developer that they consider using a platform to build something; they often have their favourite stack, whether it’s LAMP or something else. We can give a tick for commodity here. 1/1

Certainty? Are we seeing a rapid increase in use (i.e. rapid diffusion in all companies) with platforms that are increasingly fit for purpose OR are they already commonly understood, just an expected norm? I think we can say most developers would be surprised to walk into a company that was excited about its platform roll-out. They’d expect some sort of platform to exist. Strike two for commodity. 2/2

Publication types? Are trade journals dominated by articles covering maintenance, operations, installation and comparison between competing forms of platforms with feature analysis e.g. merits of one model over another? OR are trade journals mainly focused on use, with platforms becoming increasingly an accepted, almost invisible thing? Well, if we go back to 2004, journals were dominated by this platform or that platform – LAMP vs .NET and the best way to install. Today, there is much less of this and most of the discussion is about use. Strike three for commodity. 3/3

Market? When we examine the platform market are we talking about a growing market with consolidation to a few competing but more accepted norms? OR are we talking about a mature, stabilised market with an accepted form? Well, the platform market seems mature and stable with an accepted form – .NET, Java, NodeJS, LAMP etc. Commodity wins. 4/4

Knowledge management? Are we mainly learning about how to operate a platform, starting to develop and verify metrics for performance OR is this field established, well known, understood and defined? In this case, platform probably wobbled on the side of product rather than commodity. Hence, product wins and it’s now 4/5 for commodity.

Market Perception? Do we have increasing expectation of use of some form of platform and the field is considered to be a domain of “professionals” OR are platforms now considered trivial, almost linear in operation and a formula to be applied? Again, with this though we’re getting there, product still wins and hence it’s now 4/6.

User perception? When it comes to platforms, are they increasingly common, a developer would be disappointed if one was not used or available, and there is a sense of feeling left behind if your company is not using one OR are they standard, expected, and there would be a feeling of shock if you went to a company that didn’t use some form of standard platform (whether .NET, LAMP or other)? I think I can probably say that commodity wins this one; it would be shocking to find a company that didn’t use some form of platform approach and it’s that “shock” which tells you it’s in the commodity space. 5/7.

Perception in Industry? Is advantage in platform now mainly seen through implementation and features (i.e. this platform is better than that platform) rather than any actual difference in what it creates OR is platform now considered a “cost of doing business”, accepted and with specific defined models? It would be difficult to imagine a software house today that didn’t view a platform as a “cost of doing business”, so whilst there’s some wobble, I’d argue that commodity edges this. 6/8

Focus of value? Are platforms considered to be areas of high profitability per unit and a valuable model? Do we feel that we understand platforms and vendors are focused on exploiting them? OR are platforms more in the high-volume space, mass produced with reducing margin. Are platforms important but increasingly invisible and an essential component of something more complex? In this case, especially with provision of utility like services then commodity wins again. 7/9

Understanding? In the platform space are we focused on increasing our education of them with a rapidly growing range of books and training combined with constant refinement of needs and measures? OR do we believe platforms and the concepts around them to be well defined, almost stable and with established metrics. This is a tough one, I steer to the side of commodity but can easily see a case for it being still in product. However, I’m going to give this to commodity 8/10.

Comparison? Do we have competing models for platforms with feature difference? Are authors publishing some form of evidence based support for comparison i.e. why this platform is better than that because of this feature and why you should use them over not use them? OR are platforms just considered essential, an accepted norm and any advantage is discussed in terms of operations – this is cheaper or faster than that? This is a tough one but in this case, I’d edge towards product. We’re not quite at the pure operational comparison. Product wins. 8/11

Failure modes? When it comes to a platform is failure not tolerated? By this, I don’t mean there is no failure - a distributed environment based upon principles of design for failure copes with this all the time. But do we have an expectation that the entire platform system won’t fail? Are we focused on constant improvement, we assume that the use of such a platform is the right model and there exists some resistance to changing it? OR have we gone beyond this, are we now genuinely surprised if the platform itself fails? Is our focus on operational efficiency and not stopping the fires? Whilst there will be many companies with the home-grown platform effort and inevitable out of control fires, as an industry we’ve moved into the commodity space. 9/12

Market action? Is the platform space entrenched in market analysis and listening to customers? What kind of blue do you want that fire to be? OR has it become more metric driven, building what is needed? Commodity wins here, just. 10/13.

Efficiency? When it comes to platforms, are we focused on reducing the cost of waste and learning what a platform is, OR are we focused on mass production, volume operations and elimination of deviation? Again, especially since utility services such as AWS Lambda now exist, I’d argue commodity edges this. That’s 11 to commodity out of 14 – 11/14.

Decision Drivers? When making a choice over what platform to use, do we undertake a significant analysis and synthesis stage, gathering information from vendors and analysts on its suitability, OR do we just pick the platform based upon previous experience? A tough one, but again I view that commodity just edges this in the market overall, though some companies love their requests for tender. 12/15.
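
If the running score notation above seems cryptic, it’s nothing more than a tally of votes. A trivial sketch in Python, where the vote list simply restates the fifteen judgements made so far:

    # Each characteristic on the cheat sheet votes "product" or
    # "commodity"; the running score (e.g. 12/15) is commodity votes
    # over the characteristics examined so far.
    votes = ["commodity"] * 12 + ["product"] * 3  # the fifteen judgements above
    print(f"{votes.count('commodity')}/{len(votes)}")  # -> 12/15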

Overall, we can safely say that the platform space (as in code execution) is firmly in stage IV (commodity + utility) in 2017. It’s also fair to say that platform isn’t quite yet the industrialised commodity that electricity is but it has jumped from one stage (product) to the next. There’s a bit further to go. Hence, what do I know from my map and the basic patterns so far? Platform is moving into commodity (stage IV) with the provision of utility services. This will happen rapidly (a punctuated equilibrium) with such a shift (known as the “war”) normally taking 10-15 years. There will be a co-evolution of practice associated with it. Many companies will have inertia. Capital will flow into the more industrialised platform space and those higher order systems built upon it – there is going to be lots of future opportunity here. Capital will also flow out of those spaces stuck behind inertia barriers, not exactly where you want to be. Or is it?

At this point, we need to think about our purpose. My goals as a “retiring” CEO might be very different from those of the “upstart warrior” CEO. Let us assume I’m more Queen Boudica than Victor Meldrew and that I want to fight for a bold future for my “people” rather than exploit and surrender to the past. My cultural heritage is more inclined to investing in the new space rather than just exploiting the legacy. In 2017, I’m not yet in a position where I’m forced to exploit the legacy as the change is only just starting in earnest. I’m a little late but not that late.

But, hang on, aren’t I deciding here? I haven’t gone through doctrine yet and I’m already talking about how to play the game. The strategy cycle is a cycle which you will loop around many times in coming to your decision. Each time you loop around, new information and biases form that will change your purpose, your view of the landscape and ultimately your choice. This is all normal. It’s not a rigid linear path. It’s a guide. At this point, let us peek at those financial models.

Getting messy with numbers
The first thing to note is that numbers are not reality. Just because it’s written in a spreadsheet doesn’t mean it is going to happen, any more than a Gantt chart tells you what the future really holds. In this case, the CFO has had the good sense to provide a range of outcomes for two variants (the build in-house, the use of a public platform) and then complain about the lack of probability provided. I like this CFO.

Let us assume that after some badgering we have managed to tease out some probability figures for the outcomes from marketing and sales. I’ll explain a little more on how to do this later. In figure 204, I’ve added probability onto the financial models for each of the variants – variant 1 (build in-house) and variant 2 (use the public platform play). Let us go through the terms.

Probability: the likelihood of this outcome occurring according to sales and marketing.

Total investment: the total amount of capital we’re putting into this effort.

Total return: the amount of capital being returned (after repayment of investment). This is the annual net cash flow including any disposals.

Opportunity loss: the return I would have expected had I spent the capital on other projects. In the LFP scenario our standard return on investment (ROI) is 40%.

Net Benefit / Loss: how did this investment do compared to my standard expected return? i.e. total return – opportunity loss.

Expected return: the net benefit / loss * the probability of this occurring.

Figure 204 – Options analysis
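
The table in figure 204 is just this arithmetic applied per outcome. As a minimal sketch in Python: the 40% standard ROI comes from the scenario, but the investment and outcome figures below are hypothetical placeholders, not the actual numbers behind the figure:

    STANDARD_ROI = 0.40  # our standard expected return on capital (LFP scenario)

    def expected_return(investment, outcomes):
        """outcomes: list of (probability, total_return) pairs."""
        opportunity_loss = investment * STANDARD_ROI
        # expected return = sum of probability * net benefit over all outcomes
        return sum(p * (total_return - opportunity_loss)
                   for p, total_return in outcomes)

    # Hypothetical probability-weighted outcomes for each variant.
    variant_1 = expected_return(1_000_000, [(0.3, 2_000_000), (0.5, 1_100_000), (0.2, 100_000)])
    variant_2 = expected_return(400_000, [(0.3, 800_000), (0.5, 500_000), (0.2, 150_000)])
    print(variant_1, variant_2, variant_1 - variant_2)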

Given this probability profile, the best expected return comes from variant 1, i.e. building in-house. But wait, didn’t we say that building in-house was the future legacy? Well, as I did point out, most financial models have a bias to the present and hence they discount the future. The problem is that by following this path we are building up the legacy practice (and related inertia) and not positioning ourselves to build a future market. Can we somehow financially account for inertia and future position? Yes. The essential question between variant 1 and variant 2 is the following – are we prepared to gamble $435k of expected return to explore and potentially secure a more lucrative but undefined future? To analyse this is very complex. So, what do we do? Well, I will build monstrous complexities for navigation but you can SWOT it.

SWOT? But isn’t SWOT the curse of simplistic management? Yes, but it also has its uses, particularly if we understand the landscape. The problem with SWOT isn’t that it is useless but that we apply it to landscapes we don’t understand.

We have two variants – build in-house (1) and public platform (2). The strength of build in-house is that we’re familiar with this approach within our teams and it provides the greater expected return. Its weakness is that we build up our legacy position, which comes with the threat of increased inertia and future inability to change. On the other hand, using a public platform play (2) has different characteristics. Its strength is that we build up experience in the future space and, though it has a lower expected return, it provides an opportunity to develop skills and explore new opportunity. The weakness is that we’re unfamiliar with this, and the threat is that if it fails we lose face with the customer but also potentially political capital with the board. The path you decide upon really depends on you. The “retiring CEO” will plump for variant 1, the “warrior CEO” will go for variant 2.

At this point questions such as “But what if those probabilities are wrong?” and “What if the options I’m looking at aren’t right?” should be racing through your mind. So, let us tackle that bit.

Getting probability probably nearly right-ish.
As with most things in life, there exist huge amounts of uncertainty over which outcome will occur, exceeded only by the willingness of people to tell you that they would have chosen a different outcome if in fact you pick the wrong one. Fortunately, you can exploit this. First up is to use the Marquis de Condorcet’s work: get everyone familiar with the business to assign probabilities and take the average of the lot. A more refined version is to use an information market.

Information markets are simple concepts but fiendishly difficult in practice because of unintended consequences. A basic example of one is as follows. Let us assume we want to know from the company whether a project X will succeed or fail to deliver. We create a bond (called project X) which will pay a certain return (e.g. $200) if the project is successful at a specified date but will return $0 if it is not. We give everyone in the company one bond and $200 as a bonus. We then let them trade the bond in our own internal market.

Along with the nice “thank you” for a $200 gift (which has its own secondary benefits), the bond itself may be worth up to $200 or might be nothing at all. So, people will tend to trade it with others. If I expect the bond is 90% likely to fail then I’ll be over the moon to sell it to someone else for $40 and a bit gutted if it succeeds. The price on the internal market will reflect the likelihood or not of the bond, i.e. the question asked. The use of such information markets is well over a decade old but there can be lots of political complications in practice, particularly if an individual starts to make a small fortune on this. There’s nothing wrong with that - they’re somehow providing you with accurate information on the future - but it can cause difficulties.
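
A toy sketch of both approaches, assuming the $200 payout above; the individual estimates and the $40 trade price are just illustrations:

    def condorcet_average(estimates):
        """Average everyone's individual probability estimates."""
        return sum(estimates) / len(estimates)

    def implied_probability(trade_price, payout=200.0):
        """Probability of success implied by the internal market price."""
        return trade_price / payout

    print(condorcet_average([0.6, 0.4, 0.55, 0.7]))  # simple aggregation -> 0.5625
    print(implied_probability(40.0))                 # bond trading at $40 -> 0.2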

I mention this more to point out that there are lots of ways of skinning Schrodinger’s cat and finding probability. The question is always how much that information is worth to you. The cheapest way is to guess yourself, the slightly more expensive is to aggregate other people’s guesses and the far more expensive (but also far more accurate) tends to be the use of an information market. But let us assume our probabilities are “right”. This doesn’t mean one outcome will happen, it’s just a probability. We still must roll the dice. However, what we know so far is that we have this opportunity to build an LFP system, there are two variants (one in-house, one using a platform play) and whilst the in-house variant gives a greater expected short term return, the platform play prepares us for the future and the co-evolution of practice that will happen. Let us get back to our strategy loop and start looking at doctrine, especially the topic of “managing inertia”.

Managing inertia
We have the map, we can anticipate certain change and we can already see there is inertia. The question now becomes: what sort of inertia do we have? Back in 2008, I used to categorise inertia into four basic types with numerous subtypes. I’ve tidied this up since then. The basic forms of inertia are provided in figure 205, including tactics to counter and counter points.

Figure 205 – inertia


All forms of inertia relate to some loss of capital, whether physical, social, financial or political. We know that two groups (security and systems) are exhibiting inertia; however, those are usually not the problem, as we’re aware of them and hence they can be managed. The danger is always the group that hasn’t quite made itself clear.

In the case of security, the inertia is probably related to two types. First, we have uncertainty over the use of a platform play and any co-evolved practices that might emerge. This will require “Investment in knowledge capital”. We can overcome this with either training or providing time and resources to develop the skills necessary. We can certainly provide an argument that if we fail to do this then the future cost of acquiring these skills will be higher, and we will also miss out on the shorter-term motivational benefit for staff. The second type of inertia is “Changes to governance, management and practices”. Co-evolution is always difficult for people to get to grips with as it means that existing and perfectly valid best practice (for a product world) becomes no longer relevant. We can only overcome this by explaining co-evolution, usually by pointing to past examples. Both types of inertia are relatively simple to manage.

Slightly trickier is the Systems group. Along with the two types of inertia mentioned above, we’re likely to have two additional types, especially since the group builds and controls the underlying infrastructure behind any home-grown platform efforts. These are “Loss of political capital” and “Change of business relationship (loss of social capital)”.

The “Loss of political capital” includes fear over being relevant in the future, loss of status and loss of past empire. Don’t underestimate or dismiss this as it’s very uncomfortable for those who face it. You counter by giving people a path to the future and relevance in it. Start by acknowledging what has been achieved and move onto modernisation. You need to emphasise the importance of future agility, efficiency and importance to the business, and how we must build for the future. You also must include them in it. At this stage, such action is relatively trivial. The practices haven’t been developed and so there’s plenty of time for training, reskilling and the recreation of essential system concepts in a more utility platform world, from configuration to management to operation to monitoring. It’ll be different from the past but someone should develop that capability, no-one yet has those skills, so why shouldn’t it be your Systems team? Unfortunately, what often happens is companies don’t anticipate obvious changes and leave it late. This creates an added complication which I’ll discuss in a moment.

The “Change of business relationship (loss of social capital)” is the second additional type of inertia you must contend with. There’s often a pre-existing relationship with vendors who might be supplying products or services. In normal circumstances, you can deal with this through normal vendor management approaches. You can emphasise that the time is right for a change, that the past has evolved and that we need to re-evaluate the vendor’s offering. However, there’s the complication mentioned above. If you’ve left it late then the vendor of a product may well be spreading huge amounts of fear, uncertainty and doubt over the more utility form to your own team. They will probably have tried to convince your own team (e.g. in this case Systems) that they have no future in this “future world”. If they’re canny, they will have encouraged articles in the related trade press spreading the same message. This is all designed to get your own people buying the vendor’s product rather than adapting to the new world. It’ll make it much harder for you to overcome any “loss of political capital” if you’re late to the conversation. You can try and say, “don’t worry, we will invest in retraining”, but this is also where any past Machiavellian efforts or brutal corporate action will bite you in the bottom. If there exists doubt in your trustworthiness then they won’t follow but will resist. Whatever you do, as annoying as it is to be confronted by this, remember one thing: they are behaving perfectly rationally. You are the wally who left it late to deal with a highly anticipatable change and therefore caused the mess. If you want someone to blame, buy a mirror. Unfortunately, we all make mistakes. This is also why you must always consider not only your actions today but the future consequences of those actions. Having that trust can bail you out of your own facepalms.

However, we’re not in that position with the LFP scenario yet. We shall assume we have a team we can have an open and honest conversation with. We can anticipate where the future is heading with the map and we’re going to share this. We’re going to have that discussion and invest time and money in bringing our systems and security teams into this new world with new skills and new capabilities. We leave no-one behind and we certainly don’t turn up five years late to the battle.

Alas, we might still have a problem. There’s potentially another source of inertia and it’s a powerful one: the board. We know they have a concern but aren’t going to raise an objection ... yet. Now that could either be just a general question on the change or it could be hiding something else. We need to explore that. It could be as simple as “Data for past success counteracts”, i.e. they’re used to us operating in one way and we’ve not been down this path. It could be concern over “Loss of existing financial or physical capital” because we’ve invested in data centres. It could be a question of political capital, or one board member may have looked at the model and want to focus on short term expected return rather than building a future. Whatever the cause, you need to find it and fix it. That’s one of your many jobs as the CEO. There are also many other forms of inertia, so for completeness, though not necessarily relevant in the LFP scenario, we will quickly run through the other types:

“Threat to barriers to entry”, the fear that a change will enable new competitors. Whilst that fear may be justified, it is often unavoidable change that is already happening in the market and outside of your control. You cannot ignore it.

“Cost of acquiring new skillsets” is one of the more bizarre sources of inertia because the cost will often increase, especially in a punctuated equilibrium where a shortage of skills is a common consequence. There are many ways to counter this and mitigate the cost - assuming it is done in a timely fashion - from developing in-house and use of conferences to creating centres of gravity to attract talent.

“Suitability”, one reasonably common form of inertia, comes in the form of questions over whether it’s ready, e.g. is it ready for production, is the market ready for this, are customers ready? The best way to counter is through weak signals and examination of the components (e.g. using the cheat sheet).

“Lack of second sourcing options” is often a valid concern but can be used to disguise other forms of inertia. Back in 2008, it was not uncommon to hear a company say, without irony, something of the form: “We’re an Oracle shop. We’ve thought about using public cloud but were worried about the potential for getting locked in with Amazon. We want to see more choice”. If you can overcome the irrational side of the debate then this is all about supply chain management, trade-offs and use of standards where appropriate. There is a wide range of techniques to mitigate it.

“Lack of pricing competition” is another reasonable concern, which really points to how well the market is functioning. Do we have single or multiple vendors? What are the switching costs?

“Loss of strategic control” is usually wrapped up with fears of letting go, and in the cloud space led to the idea of “server huggers”. However, there are some valid aspects to the concern around the buyer vs supplier relationship, assuming you have a market that is industrialising to a commodity. Most of this can be overcome with strategic planning and examination of different scenarios, i.e. what should we do if the supplier rapidly increases price, etc.

“Declining unit value” is usually a business concern related to a desire to maintain the past. The only way to counter this is through awareness of evolution and how markets aren’t static. You need to look at alternative opportunities - think Charles Handy’s second curve - and try to avoid the spiral of death through endless cost cutting to recreate the past.

"Data for Past Success counteracts", an extremely common form of inertia particularly if the company has been successful. Most companies build up a significant stock of data that informs them how successful the past was. This will often be used to argue that the future will be more of the same. You need to take a leaf out of portfolio management and realise that your portfolio will change over time. Options analysis and risk management approaches can be useful here to avoid having all your eggs in one “past” basket.

“Resistance from rewards and culture”, hugely problematic for most companies and easily exploitable by competitors. Performance bonuses linked to selling an existing product set can be a significant source of inertia and weakness. You can manage this through HR by using higher rewards for adaptation, education, longer term thinking and promoting greater situational awareness.

“External financial markets reinforce existing models”, another common but tricky form of inertia to deal with. As discussed in the previous chapter, it’s important to understand your context and the role being played by others such as fund managers. There are certain techniques that can be deployed here to overcome market inertia including spinning a future story. 

Where are we?
We have a map of the landscape, we’ve applied basic economic patterns to anticipate change, we can see opportunity in co-evolved practice and obstacles in inertia to the change, and we have financial models and understand how we can go for a higher short term expected return or trade some of this for building a future position. Though we have inertia, we have an idea of the types and how to deal with them. Our awareness of the situation is expanding. This is good. This is how it should be.

In the above, I specifically state “anticipate change” because we cannot predict evolution over time (see chapter 7, section “the trouble with maps”). We must use characteristics or weak signals to give us an idea (a probability) of when the change will happen or even if it’s occurring today. Mapping is all about probability rather than time; the uncharted space is uncertain and the industrialised space is more known. To predict over time would mean we could say “in 453 days this [activity or practice or business model] will change from A to B”. As far as I’m concerned that is straying into the realm of charlatans, crystal ball fanatics and soothsayers. 

I often hear people counter with vague notions of time, e.g. “at some point in the future”. That is not predicting over time, as time requires a “when”. I cannot, nor have I ever been able to, predict evolution over time. In over a decade of using mapping to explore economic systems, as far as I’m aware you can only anticipate the change and refine the “when” of evolution using characteristics (as above), weak signals and probability (including information markets). Of course, I’m fully aware that I have my own inertia caused by my past success with mapping and that the subject itself will evolve. Someone else may well find a way to map over time. I will no doubt dismiss it and be proved wrong. I do hope I have the wit to use my own tool on myself at that time. “When” will this happen? As I said, I can’t predict over time and the weak signals aren’t even strong enough for me to guess.

In terms of the strategy cycle, we’ve observed the environment and moved onto orientating around it with doctrine such as “manage inertia”. However, let us explore the cycle a bit further.

Getting Primitive
In this section, I’m going to look at how we organise around the LFP scenario and put down a few markers for strategic play that we might consider. Once I have a general outline, I’ll often loop around this several times with others to refine it, to create alternative scenarios and to alter course before finally deciding upon a choice of action. When it comes to organisation, I not only use a self-contained cell-based structure (i.e. small teams) with the right aptitudes (finance, engineering, marketing) but for the last decade I’ve also been using attitude (pioneers – settlers – town planners).

I note that Kent Beck has recently been discussing a model called 3X – eXplore, eXpand and eXploit. This is excellent as there’s nothing like independent discovery to give a bit more substance to a topic. Pioneers eXplore, Settlers eXpand our understanding and Town Planners eXploit by industrialising, with each group operating and maintaining its own space. This all deserves a good hat tip to Robert Cringely and his marvellous book “Accidental Empires”. Anyway, back to the map. We will focus on the platform change as we’ve previously been building our own systems and I’ll assume that we know how to do this. In figure 206, I’ve outlined the two obvious cells that we need to consider.

Figure 206, The structure


One cell refers to town planning around the platform. Obviously, someone else is providing the platform as a utility service to us but we still need to make sure we create highly industrialised processes around monitoring the platform, access control and how much we’re getting billed. This is not something new and chances are that the provider will be offering tools to make it easy. However, there is a new set of practices that will develop around the financial cost of a function, re-use of functions and how we monitor the code itself. This is not so much related to the platform itself but to how we use it. In much the same way, the practices that changed industry were not so much about whether we paid the right electricity bill but how we used electricity to do other things. What those new practices will be is somewhat uncertain. I can guess based upon experience of running a code execution platform (i.e. serverless environment) with Zimki in 2005. But it’s no more than a guess.
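
As a flavour of the kind of practice that might develop, here is a sketch of cost-per-function monitoring. The billing constants are assumptions for illustration, not any provider’s actual rates; the point is simply that cost becomes visible at the level of an individual function:

    # Assumed utility billing rates - hypothetical, not a real price list.
    RATE_PER_GB_SECOND = 0.00002
    RATE_PER_INVOCATION = 0.0000002

    def monthly_function_cost(invocations, avg_duration_s, memory_gb):
        """Estimate what a single function costs us per month."""
        compute = invocations * avg_duration_s * memory_gb * RATE_PER_GB_SECOND
        requests = invocations * RATE_PER_INVOCATION
        return compute + requests

    # e.g. a function called 10 million times a month, 200ms at 512MB
    print(monthly_function_cost(10_000_000, 0.2, 0.5))  # -> ~$22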

We can also at this point start adding some primitive gameplays. For example, we could - if we have decided to play a legacy game and not build for the future market – spread fear, uncertainty and doubt over the utility platform. Alternatively, we might play an open play around the co-evolved practices to help them evolve more quickly. We might do this to create a name for ourselves in this space, to build a “centre of gravity” around the skillsets needed in anticipation that this will become a lucrative market for us. I’ve outlined these two very simple plays in figure 207.

Figure 207 – Two basic plays


So, complying with my natural bias, I’m going to focus on creating a future position and market rather than exploiting a legacy position. I can do this because I haven’t yet left it too late to make that choice. I’m going to try and own those future co-evolved practices, build a centre of gravity and use open source to achieve this. I’ll accept the lower expected return in exchange for a stronger future position and not building up my legacy. I’m now going to add my structure around the platform space onto my LFP map. See figure 208.

Figure 208 – Future orientated LFP map


The first thing to note is that the map is a bit messy and things seem to be in the wrong position, i.e. somehow my emerging architectural practice is above my microsite in terms of user needs, but to be honest the client hasn’t mentioned anything about this changing world. This is fine. All maps are imperfect representations and with a bit of fiddling around and moving pieces I can create something which appears to represent the situation more clearly. See figure 209.

Figure 209 – A clearer map.


This fiddling around with maps is all part of exploring a space. It allows us to challenge assumptions with others, to collaborate across multiple aptitudes (finance, engineering etc) and even attitudes (pioneers, settlers etc), to apply past lessons learned and to come up with a common understanding. We can now flesh out the space a bit more and, being mindful of our current capabilities (that’s assuming you know how many pioneers, settlers and town planners you have – most don’t), create the structure we’re going to use – figure 210.

Figure 210 – the structure.


Looping around and common problems
We now understand the landscape, the trade-off between short term expected return and future position, the structure needed, the main sources of inertia and some basics on the gameplay. Our situational awareness is constantly improving. The next thing we do is loop around the strategy cycle again and refine it. But isn’t that time consuming? Yes.

With experience, for a business that has a map, a single loop (what we’re covering in this chapter) could take anywhere up to 30 minutes. Add a couple of loops and discussions between people and you could easily have blown an hour or two before you commit to the choice. Add to that the additional hour or so it might take to create that first map and the financial models and yes, you could be looking at half a day. That is of course an incredibly long time to go from concept to decision to act.

To be honest, I can’t think of many examples where it has taken anywhere near that long. There are a few M&A activities (covering hundreds of millions) where I have taken a day or so but that is the exception and only occurs in fields that I’m not familiar with. Being locked in a room or given people to interview and asked the question “should we buy this company” often involves extracting information from others, and most of the time was spent developing an understanding of the landscape because very little existed. However, we should acknowledge that mapping does take some time and I don’t know how to make it faster. It’s one of the obvious weaknesses of mapping versus gut feel, which can be instant.

Another problem is complexity. First, mapping exposes the complexity of what exists. In the example of the Themistocles SWOT, it’s usually obvious to everyone that you should use a map, not a SWOT, to run a battle. We understand this because we’re familiar and comfortable with geographical maps in much the same way that people in business are comfortable with SWOTs. However, there is a downside: a map is inherently more complex than a 2x2 such as a SWOT, and this makes management more challenging and requires more thought. But what if you’re not familiar with maps?

Let us consider how Vikings used stories for navigation. Put yourself in the role of a Viking navigator having spent 20 years learning epic tales and being trusted with steering the boat. Imagine someone says to you that you don’t need a story, that you could use a map instead. The first time someone shows you a map, all you will see is a diagram with dots on it. You will have difficulty understanding how such a thing could replace your twenty years of epic tales. You’ll tend to react negatively because of experience, i.e. you know the stories work. You’ll have a natural human bias to that which is comfortable and previously experienced. The map will be unfamiliar, even alien, and its complexity will overwhelm you. It will take many points of exposure and the realisation that a map would have been better than a story before most will put in the effort and thought necessary to use it.

Go back to the Themistocles SWOT. Imagine if battles had been run with SWOTs and someone came up and said, “I’ve got a map thing which might help”. The reaction will be overwhelmingly negative to begin with because it’s unfamiliar (not a SWOT) and complex. It can also threaten those who have spent 20 years learning how to “battle with SWOTs” or “navigate with stories” because at its heart, it is basically saying that they’ve been meme copying all this time without understanding. Into this mix you can throw the issue that exposing the complexity also exposes the assumptions made and opens decisions to more challenge - another thing people don’t tend to like. You’ve got quite a mountain to climb with mapping. Which is probably why those with military experience (and some familiarity with situational awareness) have an easier path to mapping. The worst cases are normally those who have no military background, 20 years or so of “strategy” experience and an MBA.

However, let us assume you persevere: you create a map, you loop around the strategy cycle and over time (an hour or two, possibly more), through the application of thought, a context specific path becomes clear. What now? I tend to double check it as a final step. I find that using a business model canvas is brilliant for this as by that stage you should have everything you need to fill it in. Let us assume you decide to play the future game and roll the dice.

Opportunities multiply as they are seized.
You’ve decided to build the LFP system, using it as a springboard to develop a future position around the co-evolved practice that will emerge in the platform space. You’ve overcome your internal inertia through discussion, formed the teams and explained this to the board. You’ll sacrifice some short term expected return for a future position, with an eye to repackaging the solution and selling it to others whilst developing a new practice in the co-evolved space. You roll the dice and it comes up ... outcome 2. Oh, damn.

The LFP system isn’t going quite as well as we might hope. Fortunately for us, we didn’t build the in-house variant, otherwise we’d be losing money right now and our discussions with the board might be getting more complex. The problem with our options analysis is we didn’t price in any variability and risk appetite. The in-house variant was riskier because it not only had the highest expected return but also the lowest - there was a wide spread of outcomes. In this case, outcome 2 is a net loss. We can chalk that up as a future learning lesson (or in my case – a past painful lesson). However, let us compare what happens with outcome 2 in both variants. Let us say that despite things not going so well, both marketing and engineering have dived in and come up with proposals. There are two options on the table. So which, if any, do we choose?


1) Engineering says they could improve code efficiency by 75% for $350K

2) Marketing say they could add 400k extra microsite visitors for $150K each month

Let us go through each variant. In figure 211, I’ve added the financial impact for the proposals on the in-house variant.

Figure 211 – Financial Impact on in-house variant


I’ve started with outcome 2 (what is happening) as the base case and simply added the change. The first thing to notice is that the development proposal doesn’t make the case better, it makes the finances worse. Why? Because the cost is already sunk and spending money on refactoring doesn’t improve the financial case, as there is nothing to be recovered through code efficiency. The only possible saving grace would be through releasing some hardware to get a quicker sale of it at a less depreciated value. That’s in the realm of wishful thinking in most cases. As sad as it is to say, it’s often difficult to justify spending more money on a refactoring effort in such circumstances. The marketing proposal gives us some uplift; at least it recovers some of the pain. Our final return is still below our normal expected return but we’re saving a bit of face. The combination of both development and marketing gives us the benefits of marketing combined with the loss of development. It’s far better to just do the marketing.

Ok, so let us repeat this exercise but now look at variant 2 – the public platform play. I’ve created the model in figure 212.

Figure 212 – Financial Impact on public platform variant


The first thing to note is we’re in much better shape because we didn’t have that initial sunk cost of investment. But then something odd happens. If you look at the development option, by spending money on refactoring we make a better return! A huge return! Hang on, how’s that possible? Well, simply put, we’re paying for consumption of our utility code execution environment (such as AWS Lambda) based upon use. Make the code more efficient and you pay less. There is suddenly a financial reason for refactoring code. There are many other benefits with such platforms around consuming services and code re-use, but the changes to the way we write, refactor and monitor code are significant. This is what co-evolution is all about and in this case, it’s the collision between development and finance.

The second thing to note is that marketing is a net loss. How is that possible when in the in-house variant it’s positive? On a consumption basis, the cost (including not only marketing but operation) for each new user marketing acquires significantly exceeds the revenue they create, and so it’s a loss at this price. But in the first variant, most of the costs have already been spent in the initial upfront investment, in which case, given we’ve already spent most of the money, we may as well spend a little bit more to get the revenue. Hence the divergence here. The marketing proposal makes sense in the in-house variant because you’ve already blown most of the cost, but it doesn’t in the second because there’s a direct linkage of actual cost against revenue.

But hang on, the third option of both marketing and development looks better than all of them. How can that be? In this case, the reduced cost of each user on the service (because of refactoring, i.e. the development effort) means that the total cost per user (i.e. marketing plus operational) is now less than the revenue they create. Hence the last option gives us the best choice and that’s where we invest. This shift towards utility platforms and billing at the functional level fundamentally changes your entire investment approach in projects. Refactoring suddenly becomes a financial consideration. The true costs (not just acquiring but operating) of marketing are exposed. Where you invest changes. Hence, we’re already starting to experience some of those co-evolved practices and this looks like a big change. In fact, I know it’s going to be enormous, which is why I created that first platform back in 2005, but as you’ll come to learn, these opportunities jump at you when you embrace the future.
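
To make the divergence concrete, here is a toy model of the two variants. The $350K refactoring cost, $150K monthly marketing spend, 400k extra visitors and 75% efficiency claim come from the proposals above; the user base, revenue and consumption cost per visitor are invented placeholders:

    EXISTING_USERS = 1_000_000   # hypothetical current monthly visitors
    EXTRA_USERS = 400_000        # marketing's proposed extra visitors
    REVENUE_PER_USER = 1.00      # hypothetical revenue per visitor
    USAGE_COST_PER_USER = 1.20   # hypothetical consumption cost per visitor
    MARKETING_SPEND = 150_000    # marketing's monthly cost
    REFACTOR_COST = 350_000      # engineering's one-off cost
    EFFICIENCY_GAIN = 0.75       # engineering's claimed improvement

    def monthly_impact(consumption_billed, refactor, market):
        new_users = EXTRA_USERS if market else 0
        benefit = new_users * REVENUE_PER_USER
        cost = (MARKETING_SPEND if market else 0) + (REFACTOR_COST if refactor else 0)
        if consumption_billed:
            # every user carries an operational cost, cut by refactoring
            per_user = USAGE_COST_PER_USER * ((1 - EFFICIENCY_GAIN) if refactor else 1)
            cost += new_users * per_user
            if refactor:  # the existing user base also gets cheaper to run
                benefit += EXISTING_USERS * USAGE_COST_PER_USER * EFFICIENCY_GAIN
        # in-house, infrastructure is a sunk cost: refactoring recovers
        # nothing and extra users are served at no marginal cost
        return benefit - cost

    for billed in (False, True):
        label = "consumption" if billed else "in-house"
        for refactor, market in ((True, False), (False, True), (True, True)):
            print(label, refactor, market, monthly_impact(billed, refactor, market))

With these placeholder figures, refactoring alone is a loss in-house but a large gain on consumption billing, marketing alone flips the other way, and the combination wins only on the utility platform, which is the pattern described above.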

But why didn’t I continue and rebuild the platform after the parent company decided it wanted to go elsewhere? Well, I spent a bit of time working on printed electronics and then met an astronaut, but that’s the next chapter. The one thing I want you to remember from this discussion is that spreadsheets are wonderful but they’re not a substitute for situational awareness. Loop through the cycle, understand your landscape, anticipate change, manage inertia, structure around it and then apply tools, choices and biases to help you decide where to act. Maps aren’t a substitute for thought, they’re an enabler of it. By now you should be thinking of how you can use maps to communicate across finance, engineering, operations, purchasing and strategy, from anticipation of change to organisational structure. As you’ll discover soon enough, this is only the beginning.


April 18, 2017

Screen Select Lives! by Feeling Listless

Film As you know, I recently signed up again with Lovefilm-by-post, having become tired of the tedious wait for the few films I actually want to see to be uploaded to one of the streaming services (as opposed to those entertainments which I'll watch because they're there). Surprisingly, they'd retained my previously viewed items from the six months before, back to 200 titles.

I've always been slightly cheesed off about the brevity of that list. Back when Lovefilm had its own website, and before that ScreenSelect, it was possible to look backwards right through the archive, to be able to check if you'd seen a title before. Now, it seemed, anything before 200 was dropping off, just a year or two going backwards.

Well. Idling online late the other night, I was startled to discover that the entire archive is still there. Amazon still retains the entire list of everything I've watched either via shiny disc or streaming right back to 2004, albeit in their own format.

For the three people reading this for whom it'll be of interest, here's how I found it.

At the top of the page under the search box is a link for "Stuart's Amazon" (replacing my name with yours). Click this.


Now you'll see a link for "Improve Your Recommendations". Click that too.

Log in. That brings up a page which defaults to items you've purchased. To the left there's a link called "Videos you've watched". Click that.


You'll now see a list of all the discs you've had by post and watched through Amazon Prime in reverse chronological order.

This is where it gets a bit tricky. Scroll to the bottom of the page and you'll see a yellow "next" button with 1-15 to the left. Click that.


At this point I assumed that this would just take me backwards through the two hundred. But I was wrong.  It went even further.

 Now, look up at the address bar. You might need to scroll a bit but it should contain something like the following text:

https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next?ie=UTF8&collection=watched&iyrGroup=&maxItem=30&minItem=16

As you can see at the end, there are instructions to tell the website which section of the DVD list to show, fifteen items at a time, in this case items 16 to 30. This is the tricky part.

Feeling my way backwards, I first tried to look at the page with items 185 to 200. So I changed the numbers thusly:

https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next?ie=UTF8&collection=watched&iyrGroup=&maxItem=200&minItem=185

That worked. So I decided to go further.

https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next?ie=UTF8&collection=watched&iyrGroup=&maxItem=2000&minItem=1985

And I was amazed to find a series of items from the Lovefilm era, from Michael Clayton to Spider-Man 3. How far backwards did this go?

https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next?ie=UTF8&collection=watched&iyrGroup=&maxItem=3000&minItem=2985

That produced a blank page indicating I hadn't watched anything, so I began working backwards in 50-item intervals until, magically, I reached:

https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next?ie=UTF8&collection=watched&iyrGroup=&maxItem=2604&minItem=2590

And there was the start of my viewing list, right back in 2004 in the ScreenSelect days, with the "previous" button at the bottom of the list allowing me to go forward in time. A record of my DVD viewing for the past decade and a half, beginning with my French New Wave obsession.
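
For anyone who'd rather not edit the address bar by hand, a throwaway script along these lines (assuming the URL format above keeps working) generates the same fifteen-item windows:

    BASE = ("https://www.amazon.co.uk/gp/yourstore/iyr/ref=pd_ys_iyr_next"
            "?ie=UTF8&collection=watched&iyrGroup=&maxItem={}&minItem={}")

    def page_urls(total_items, window=15):
        """Step backwards through the watched list in fixed-size windows."""
        for max_item in range(total_items, 0, -window):
            yield BASE.format(max_item, max(1, max_item - window + 1))

    # 2604 items gets back to the very first ScreenSelect rentals
    for url in page_urls(2604):
        print(url)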

There they were, the first discs I ever rented, Keanu Reeves actioner Chain Reaction, The China Syndrome and anthology series "Perfect Crimes" with its episode by Steven Soderbergh.

Much of this first year is recorded already on this blog in the ultra tedious Review 2004, but everything after that is like a diary of my viewing tastes which have always been eclectic and reminded me that back then I was just as likely to watch something archival or back catalogue as something new.

That's something I'm trying again.  To be continued.


Elizabeth Wurtzel on Girls. by Feeling Listless

TV For the Washington Post. She saw a lot of herself in there:

"I am scared of my 20s. That decade took me down. My Room 13 is being 25 again. I spent every day getting over the night before. I lived downtown in New York City, in every neighborhood south of 14th Street, because I moved all the time, as I ran rampant through life. I had boyfriends who broke lamps to make a point. I ordered in morning coffee at 2 in the afternoon. I did not understand a schedule. My heart had a black and blue mark on it all the time."
Elsewhere, here's Lena Dunham on the final episode. I thought initially it was a bit "These Are The Voyages" in that episode nine felt like the structural end of the series, but on reading this I can appreciate that having episode nine as the climax isn't very Girls. Episode ten and that final shot is it. Marnie spin-off please?


The sudden eruption of news by Charlie Stross

Theresa May, UK Prime Minister, has just announced her intention of calling a UK-wide general election to be held on June 8th. (She will have to bypass the Fixed-term Parliaments Act 2011, achieve a 2/3rds majority, or call a vote of no confidence in her own government in order to do it, but one way or the other, she can make it happen.)

Parliamentary boundary changes coming into effect in 2018 do not apply; this election will be carried out in existing constituencies rather than the downsized number due for a 2020 election.

May currently has a roughly 20% lead in opinion polls and faces disorganized opposition, except in Scotland (which, with roughly 10% of the total seats, can safely be ignored: she risks losing at most a single sitting MP north of the border—her only one).

Predictable side-effects would include the next UK general election scheduled by the Fixed-term Parliaments Act (2011) being pushed back to June 2022, three years after the due date for the conclusion of Article 50 negotiations over UK departure from the EU (rather than 13 months after Brexit-date).

I have some speculations about the big picture and what's going on, but before I unleash it on the blog I want to see what the hive mind thinks.

(Previously, I intended to blog a blue-sky SFnal world-building question this week, but hey: politics just farted.)


April 17, 2017

My Favourite Film of 1903. by Feeling Listless



Film  Some brief notes on genre. Again.

The Great Train Robbery is popularly thought of as the first Western, or at least a precursor to the modern western. But in production, this was not in the minds of the filmmakers. The situation is rather more complicated and although for various reasons it's possible to label it a "western", it's also a number of other things.

Genre tends to be defined in two ways, semantic and syntactic. Semantic refers to how the film looks; the tropes of the genre are in what we can see. If everyone has guns, hats and horses in a desert, it's a western. If it's guns, hats and cars in the city, it's a gangster film. If it's phasers, environmental suits and spaceships, it's sci-fi.

Syntactic is about the structure of the story. A romantic comedy is a meet cute with various obstacles inhibiting the couple from coupling until they do. A hyperlink film has lots of different plots, with lots of different demographics of people mixing unexpectedly across numerous geographic locations. A walking film is a road movie on foot. If everyone dies at the end, it's a tragedy.

There are also two ways of deciding the genre in which a film fits. The first is to watch a “corpus” of similar looking films, looking for commonalities, “tropes” and then dismissing titles which don’t match and seeking others which do. Semantically that’s how The Great Train Robbery became thought of as a western due to the tropes we’ve already discussed.

The other is through cycles, in which a film is popular, does business and so a lot of similar films are made to capitalise. Found footage films are a recent example, as is teen horror in the wake of Scream or “torture porn” after Saw. Usually these genres are actually presenting a new twist on some old format and so antecedents will show themselves.

Which is why The Great Train Robbery is so complicated; in production it was actually within a contemporary cycle of "heist" or caper films (see also A Daring Daylight Burglary) and is even still listed as such on Wikipedia. It fits the syntactic tropes of planning a robbery and carrying it out, ala the Oceans films, albeit over a slender run time.

It’s also a period drama, since it’s recalling recent history, the production design recreating a landscape and people from just a couple of decades previous not unlike a 2010s filmmaker setting their film in the 80s. Some of the people watching The Great Train Robbery would recognise the images in their own memories.

The “western” didn’t exist as a film genre when this was made, the term not being used until 1912 and even then it would be decades before directors set out to make a “western” rather than a film which happened to be set in the 1880s in the American West. The final shot of the film is up front on the genre’s Wikipedia page.

Which is why I find film studies so interesting. Nothing is fixed, everything is in flux and preconceptions can be annihilated with a new piece of information or thought and how you approach viewing films changes. Watching The Great Train Robbery as a heist or period film gives it a completely different texture, making it even more entertaining.


April 16, 2017

Romola on Feminism. Lots of other things. by Feeling Listless

Film Eva Wiseman talks to the next Doctor and recounts this horrifying anecdote about the filming of Dirty Dancing: Havana Nights:

"Romola Garai was 17, standing in her underwear while a female producer pointed at her thighs and told her: “This isn’t good enough.” She was weighed in and out every day, with a dietician flown to Puerto Rico to make sure she stayed underweight. It was her first Hollywood studio film, a sequel to Dirty Dancing, and it would prove to be her last. “It screwed me up for years. Not only did it completely change how I felt about my body, but I felt like I’d failed because I hadn’t fought back. I felt complicit, because I didn’t say no. I signed off on Photoshopped images and felt terrible for perpetrating this… lie.”"


April 15, 2017

The Pilot. by Feeling Listless



TV "She was fat. I'd fatted her."

Last night I watched the Adam Sandler film 50 First Dates. Most of Sandler's oeuvre is awful but every now and then even he manages to turn out an averagely decent piece of work, and at the centre of 50 First Dates is the very sweet story of a guy who falls in love with a girl with Goldfield's Syndrome, played by Drew Barrymore, who wakes up every morning having forgotten everything which happened the day before, going backwards to the day of an accident. As with The Wedding Singer, there's real chemistry between the two leads which shows that with the right material Sandler can be a likeable lead. Unfortunately the whole rest of the film is a non-PC shitshow with cruel jokes about people with mental illness and an extremely racist performance from California-born Rob Schneider as a Polynesian. So all the while you're grinning as Adam and Drew make googly eyes at each other, you're also aware of just how awful much of the surrounding tissue is.

That's probably how I felt about Doctor Who's The Pilot, because tossing that fat joke into Bill's opening scene did little to warm me to her and so undercut whatever the rest of the episode was trying to do. I've had the structure of the line circulating backwards and forwards trying to decide who the joke is supposed to be on, and it keeps returning me to Bill's misfortune at having accidentally made a girl she fancies fat, no longer beautiful and so therefore undateable, with a side order of cheap humour about the intelligence of models, which is precisely the kind of garbage the likes of Chrissy Teigen have to deal with. Admittedly Donna could be cruel on occasion, but as the Doctor has said somewhere in the past, first impressions count and this threw me. If the idea was to make her a human being who says stupid things, fine. But the ad campaign talks about this being a show about heroes, and it's simply not right for someone who's supposed to be a children's hero to make fat jokes, especially if it's a child who is currently being bullied at school for being overweight themselves.

Good evening, welcome back and sorry that I can't be as effusive as everyone else. There was a much derided column in The Guardian the other day about how Doctor Who's become stale and although I took issue with the writer Abigail Chandler about Robot of Sherwood, which was my favourite episode of an otherwise often unwatchable Season Eight, there wasn't a lot in there I could disagree with. After the patchy season nine, an only decent Christmas special and a rubbish Christmas special, Steven Moffat feels like a creatively spent force who's lost focus on exactly what the show he's writing is supposed to be (sideways glance at Sherlock). Perhaps because of this, I was the least excited I've ever been about a season opener, despite going through the motions, including wearing my Eighth Doctor t-shirt to work this afternoon (not that anyone cared enough to mention it). My heart just hasn't been in it.

Does The Pilot help?  Well, yes, it's fine. Although it does at least subtly change the format again, from the lengthy scene thing Moffat's been experimenting with these past few years to something more akin to earlier years, it's not the massive game changer we were promised. Perhaps the show is just too old and has too many different iterations for that to happen. But there are enough sparkly moments in here to suggest that the writer/producer appreciates some of the weaknesses, especially in the Doctor's characterisation, which we've had to endure recently. The fact that I'm writing this review shows that it was interesting enough for me to care, something which wasn't certain. Then again, every season I wonder if I'll bother writing these things and yet here I am again on a Saturday night developing laptop hunch and creating lines on my arms where they're resting on the edge of the table. I know I could buy one of those rubber rest things, but they bring me out in a rash.

The structure of the episode is quite different to usual. The first half develops across what must be six months as the Doctor casts himself as Frank in his own version of Educating Rita. This is the stronger passage as it's implied that this soft reboot will see an Earth-bound Doctor working out of a university fighting aliens, with Nardole as his butler and Bill as the new Jo (even if the bit with the festive mat implies he's taken at least one trip in the TARDIS). Then halfway through, and unlike any of the opening episodes since the show came back, the Doctor whisks his new friend through space and time. This gives everything some scale whilst simultaneously (and not unlike the first episode of Quantum Leap) explaining the premise of the show for potential newbies so that the second instalment can be largely free of the usual explanations. No "Is this a different world?". We've done that. No "Who are the Daleks?" We've done that too.

Yet for all that I'm not satisfied. The idea of companions ignoring rote reactions to the TARDIS and the Doctor showing off has become so cliched itself now that it would probably have been more surprising if she'd gone through the motions. Throughout there's a constant sense of trying to undercut the magic. Big lighting reveal of the interior, joke about it looking like a knock-through and a kitchen. Contrast that to Ian and Barbara's faces in Doctor Who's actual pilot (depending on which mounting of the second half you're watching) and there's no contest. Admittedly I cheered along with the Doctor and Nardole when Bill finally said that it was bigger on the inside, but it does work against one of the series' best moments. Sometimes the cleverest thing is not to try to be too clever.

That goes too for whatever lies behind JJ Abrams' box or, as is the case here, vault. I've never been particularly keen on those stories in which the Doctor himself is a mystery, or rather where there's a mystery about something he's doing rather than who he is. Whilst it's true that, like with Eccleston, we're wondering what's been happening to the Doctor since last we saw him and how he's ended up in this predicament, it's always tricky pointing towards a viewpoint character and then deliberately omitting narrative information about them as Moffat is trying to do here, especially when Bill never quite feels like the protagonist and can't be. Whatever the Doctor's mission is here is a mystery simply because you haven't shown us the initiating scene. Perhaps if Bill had indicated any great curiosity herself about what's in the box, sorry, vault, it would have provided a useful counterweight, but Moffat doesn't want us to care too much about it yet, so she doesn't either.

Nardole's presence still doesn't make any sense either. As we're reminded through Bill of the healthy characterisation that recent companions have enjoyed, not to mention proper introductions, he's an anomalous blank. Kind of amusing, but for the most part stripping the Doctor of some of his eccentricities and whimsy. Having the Second Doctor as companion to the Third sounds fine in theory, and didn't we all enjoy The Three Doctors, but for the most part he seems to exist because Matt Lucas said he'd like to be in Doctor Who again and everyone supposedly likes him. He's Handles with limbs. He's Kamelion unfettered by Anthony Ainley's availability. I'll keep the faith for now. Lucas's chemistry with Capaldi is obvious and it's possible the next eleven episodes will include something which'll make me love him, but at the moment, yeah, ok, shrug emoji.

Like Rose, The Pilot contains a pretty low key antagonist of the week, the stuff of annual prose stories and Class, perhaps from the same genome as the waters of Mars. The CG isn't quite as seamless as perhaps you'd like it to be, but the shots of actress Stephanie Hyam underwater and breaking the surface are creepy, especially from side on. Taking her to the middle of a Dalek war was a logical way of working in the Friend from the Future footage, which then, curiously, mostly doesn't appear. Which somewhat makes sense - the episode would literally have had to stop to accommodate pre-shot material everyone has seen - but it is distracting to be sat waiting for them to turn a corner and walk straight into it, especially since Bill is wearing her accidental tribute to Prince t-shirt. Where does that leave Friend from the Future? A dream? A side step within this scene, perhaps occurring during a Nardole cutaway?

The episode is at its best during the kisses to the past. The photos of Susan and River on the desk, the sonic screwdriver collection in the pen pot, the Movellans (more thrown away than the pre-season trailer perhaps had us believe). Unless it's an anniversary year, Moffat is reticent about these kinds of references, almost embarrassed, but every franchise is enriched by its mythological tapestry and should be happy to embrace it. After watching tons of Star Trek lately, I'm pretty much convinced that taking the time to create back story and baggage is why shows like this have the greatest longevity. Here the Doctor's talking to photos of his granddaughter and late wife (ish) and at no point are we told who they are, which is as it should be. Like I said, mysteries about who a person is are always more interesting than about what they're doing and why they're doing it.

Recent Capaldi continues to be the character he clearly wanted to be from the start but was boxed in from becoming, a patrician at times, with elements of Henry Higgins and Lear's fool. A good man, in other words, someone who simply wouldn't react in the same way if he was to be faced with The Caretaker or Kill The Moon now, who wouldn't insult Danny Pink with such ferocity. Is this as a result of his loss of memory? Does it matter? Initially having Clara's theme under his decision not to mind wipe Bill feels like a misstep, his treatment of Donna surely being the clearer reference, but Journey's End was nearly nine years ago and sometimes television has to assume Netflix or blu-rays don't exist. The guitar and shades business is still a pain in the arse, but Moffat seems to have tossed that in here as a joke rather than something to be pursued going forward.

Having said all of this, it's just possible I'll watch it again and have another reaction entirely, as anyone who read my positive review of Class's first episode will know. However much the production team want to downplay the fact, having a gay companion is huge and despite some of the lines she's been given, Pearl Mackie's offbeat performance is a refreshing contrast to what's gone before. Even with that opening scene, this is me singing her praises. I just wish Steven didn't have such a cloth ear for the implications some of his dialogue can have. Plus, looking forward, with Michelle Gomez having redefined the Doctor's Time Lord nemesis, having her stuffed into the trailer and then having the John Simm version as some big reveal feels a bit shoddy. And Frank Cottrell-Boyce is back next week. Shudder. Happy Easter!




Updated using Planet on 22 April 2017, 04:48 AM