Francis’s news feed

This combines on one page various news websites and diaries which I like to read.

March 26, 2015

LPSC 2015: MESSENGER's low-altitude campaign at Mercury by The Planetary Society

At last week's Lunar and Planetary Science Conference, the MESSENGER team held a press briefing to share results from the past few months of incredibly low-altitude flight over Mercury's surface. The mission will last only about five more weeks.


Meet NASA's Winning Asteroid Redirect Spacecraft, and the Asteroid It May Visit by The Planetary Society

NASA has decided to pluck a small boulder off a large asteroid, instead of bagging an entire asteroid outright, the agency announced Wednesday.


One-Year ISS Mission Preview: 28 Experiments, 4 Expeditions and 2 Crew Members by The Planetary Society

This Friday, astronaut Scott Kelly and cosmonaut Mikhail Kornienko will embark on a one-year mission aboard the International Space Station.


March 25, 2015

Soup Safari #18: Lentil at Uncle Sam's Bistro & Restaurant. by Feeling Listless







Dinner. £3.50. Uncle Sam's Bistro & Restaurant, 94 Bold Street, Liverpool, Merseyside L1 4HY. Phone: 0151 709 2111. Website.


An exercise in futility by Charlie Stross

A piece of ebook-reader software called Clean Reader has been generating headlines and causing indignation among authors recently:

A new app that allows readers to swap swear words in their novels with sanitised versions is facing a backlash from furious authors, who have accused it of setting a dangerous precedent of censorship. The app, entitled Clean Reader, has been designed to take explicit words out of any book printed in electronic format - with or without permission from its author - to swap them with child-friendly versions.

(I'm not linking to Clean Reader directly—don't want to give them any free inbound Google mojo.)

Mangling an author's text is a clear violation of the author's Moral rights, an element of copyright which is very weak in the United States and very strong elsewhere (primarily in civil law jurisdictions). (The moral right is the right of an author to be identified as the creator of a work, and for the work represented as their creation to be unaltered by other hands, so that the relationship between creator and created work is clear.) Mangling an author's text may be legal or illegal in the USA, depending on whether it occurs before or after sale. After all, I can't stop you buying one of my books and editing it with a sharpie: it's a physical object and according to the first sale doctrine, it's yours to do with as you wish. I may be able to legally stop you modifying an ebook, though: ebooks are not sold but a limited license to download and use them is granted in exchange for money—a fine legal distinction that was borrowed from the software business's tame sharks—and that limited license may permit or deny such usage.

Clean Reader claim to get around this by (a) being a licensed distributor (they provide the app and sell books for it sourced from PageFoundry, a distributor who back-end onto various publishers), and (b) the censorship is performed on the reader device by the reader app, once the book has been purchased and downloaded. There's a bunch of case law around whether or not it's legal to do this to movie rentals or downloads, or legal to skip advertisements in recorded programming on your TiVo—it gets murky fast. But let's suppose they're right and what they're doing ("protect the children! At any cost! From naughty words like 'breast' and 'fuck'!") is legal.

Speaking as an author who deeply resents the idea of his books being mutilated to fit the prejudices of a curious reader's blue-nosed and over-protective parents (hint: I write for adults—if you don't think my books are suitable for your or your child's tender eyes, don't buy them), what can I do about this?

It's worth quoting some correspondence posted on Absolute Write at this point. The PR contact for Clean Reader had this to say, in answer to a public enquiry:

As for how we deal with context, the app does look for specific sequences of letters lick cock, shit, or f--k. But it also requires white space on both sides of the word. So your example of cockapoo would not be blocked by the app. But cock a poo would have cock blocked. There will be times when the app blocks a word that isn't being used as a profanity. Jesus Christ is another example. If a reader is reading the Bible with Clean Reader there will be quite a lot of words blocked; hell, damn, ass, Jesus, etc. The user will have to make a judgement call as to whether or not to use the "Clean Reader" feature with each book. If it's a religious book they may just opt to turn the feature off. Or if it's a book about chickens they may want to leave it off also. But for example, I'm currently reading American Sniper. It seems to have at least one F-word on every page and sometimes multiple per page. It's frankly a little over the top. Otherwise the book is fantastic and entertaining. So even if the app blocks out a word every now and again that wasn't necessarily being used as a profanity, I'd rather deal with that then have to read F--- every page. Those who have written articles about Clean Reader have typically downloaded a book that is riddled with swear words to show examples of how frustrating the book would be with Clean Reader. But I can tell you we aren't selling many of those types of books. I've read several books with the app and I typically only see a word blocked once every few pages. And it's usually pretty easy to get the gist of what was being said. It's just nice to not actually see it.

So. While it might be possible to get my books pulled from that particular distributor, I am more inclined to deal with this idiocy by getting creative with my scatological vocabulary.

No more "fucks" freely interjected; instead I shall steal "unclefucker" from South Park.

No more "cunt!" as a free-standing gender-neutral insult[*]; instead it'll have to be "cuntfart!" or "pissflaps!" or "clunge!" (go look it up) ...

... But that's not going far enough.

I am pretty sure there's plenty of context in which the censorbot can be induced to fuck-up a perfectly clean paragraph beyond all recognition, simply by removing words delimited by whitespace. "Chimney-breast" for example, becomes "Chimney-chest". "The cunt line of the mainbrace" becomes "the bottom line of the mainbrace".
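(For the technically curious, here is a minimal Python sketch of the kind of whitespace-delimited filter described in the quoted correspondence. It is emphatically not Clean Reader's actual code, and the word list and replacement strings are invented for the illustration.)

```python
# Toy model of a whitespace-delimited profanity filter, as described in the
# quoted correspondence above. Not Clean Reader's actual code; the word list
# and the replacement strings are invented for this illustration.
REPLACEMENTS = {"cock": "rooster", "breast": "chest"}

def censor(text: str) -> str:
    # Tokens are found by splitting on whitespace only, so "cockapoo" passes
    # untouched while "cock a poo" has its first token replaced: exactly the
    # behaviour the PR answer describes.
    return " ".join(REPLACEMENTS.get(word, word) for word in text.split())

print(censor("a cockapoo ran past"))   # -> a cockapoo ran past
print(censor("cock a poo"))            # -> rooster a poo
print(censor("the chimney breast"))    # -> the chimney chest
```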

How far do you think I can take this?

UPDATE:

Cory Doctorow takes a radically different approach ("I hate your censorship, but I'll defend to the death your right to censor"). I think he's missing the distinction between censorship and editing—that what's happening here is not straightforward "you can't read that" blocking, but actual substitution of someone else's words for my own, subtly or unsubtly corrupting and misrepresenting the author's words. One thing is clear, though: while we're having a doctrinal argument, it's important to keep in mind the essential fact that we both think that Clean Reader users are stupid poopy-heads.


[*] That's what "cunt" is, in Scottish vernacular—usage differs wildly across the anglophone world, and in Scotland it carries much less gendered misogynistic freight than it does in American usage.


Best Animated Character Within a Live Action Context. by Feeling Listless

Film I'll make this brief because there really isn't much to the idea. In recent years there's been much annoyance that the likes of Andy Serkis and Andy Serkis haven't been nominated for acting awards despite their motion-captured performances being an important part of the process of creating a computer generated character like Gollum or Kong. Extraordinary pieces of magic like Paddington (which I saw last night and which led to this brainwave) are lumped into the general special effects categories when the work on them is clearly of a different order to throwing a car at a motorway or what have you.

Last night it occurred to me that what could happen is that the characters themselves are nominated. In other words, the Oscar would go to Gollum or Paddington or Groot as the result of a collaboration and the collaborators who worked on the character as a group would win the award, the voice/actors, designers and animators with the name of the character as the representative element of the achievement (the members of that group decided upon by the production as part of the nomination process perhaps with the director/producers deciding which element they're most proud of).

To separate it from straight animated characters, it would be called something like "Best Animated Character Within a Live Action Context" or something snappier. So that different achievements can be represented, you'd also perhaps only allow one character per film, which would make things tricky for Guardians of the Galaxy.  Also you'd have to limit it to characters who're predominantly CG.  Stuff like cartoon Legolas wouldn't count.

All of this would reduce the discussions about how much of an actor's performance is enhanced by animators, how much of it is truly just about creating a fully computer generated make-up or simply copying a performance.  As to who would actually get the award?  A raffle?  Or would it go to the animation studio?  Dunno.  Actually this still means Andy Serkis wouldn't end up with an award doesn't it?  Oh hum.


Art of the Title inevitably covers 22 Jump Street. by Feeling Listless

Film Clearly the funniest part of 22 Jump Street is the closing titles, in which we're given a preview of upcoming previews heading off into infinity along with all the logical diminishing returns. Here, in an absolutely massive post, creative directors, producers and the directors discuss the concepts and production:

"When did you begin to think about the end title sequence for the film?

Phil Lord: We had always planned to do something. Originally we ended the movie with Jonah and Channing walking away from Cube saying, “We are never ever going to do this again,” and then we were going to cut to future Jump Street movies with other actors."
The film is now available on Netflix UK and I really do recommend it. There's a clever sense of irony about the whole thing, a you know that we know that you know that we know that ...


My Favourite Film of 2002. by Feeling Listless



Film Much as I love Kissing Jessica Stein, and I love Kissing Jessica Stein, over time I have begun to wonder how it's viewed in the LGBT community. After Ellen gave it a retro-review in 2008, and the four participants seemed generally OK with it, with a few reservations.




The First Star Clusters by Astrobites

Title: The Luminosity of Population III Star Clusters

Authors: Alexander L. DeSouza and Shantanu Basu

First Author’s Institution: Department of Physics and Astronomy, University of Western Ontario

Status: Accepted by MNRAS

First light

A major goal for the next generation of telescopes, such as the James Webb Space Telescope (JWST), is to study the first stars and galaxies in the universe. But what would they look like? Would JWST be able to see them? Recent studies have suggested that even the most massive specimens of the very first generation of stars, known as Population III stars, may be undetectable with JWST.

But not all hope is lost–one of the reasons why Population III stars are so hard to detect is that, unlike later generations of stars, they are believed to form in isolation. Later generations of stars (called Population I and Population II stars) usually form in clusters, from the fragmentation of large clouds of molecular gas. On the other hand, cosmological simulations have suggested that Population III stars would form from gas collected in dark matter mini-halos of about a million solar masses in size which would have virialized (reached dynamic equilibrium) by redshifts of about 20-50. Molecular hydrogen acts as a coolant in this scenario, allowing the gas to cool enough to condense down into a star. Early simulations showed that gravitational fragmentation would eventually produce one massive fragment–on the order of about a hundred solar masses–per halo.  This molecular hydrogen, however, could easily be destroyed by the UV radiation from the first massive star formed, preventing others from forming from the same parent cloud of gas. While Population III stars in this paradigm are thought to be much more massive than later generations of stars, they would also be isolated from other ancient stars.

However, there is a lot of uncertainty about the masses of these first stars, and recent papers have investigated the possibility that the picture could be more complicated than first thought. The molecular gas in the dark matter mini-halos could experience more fragmentation before it reaches stellar density, which may lead to multiple smaller stars, rather than one large one, forming from the same cloud of gas. These stars could then evolve relatively independently of each other. The authors of today’s paper investigate the idea that Population III stars could have formed in clusters and also study the luminosity of the resulting groups of stars.

Methodology


Figure 1 from the paper, showing the evolution of a single protostar in time steps of 5 kyr. The leftmost image shows the protostar and its disk at 5 kyr after the formation of the protostar. Some fragments can be seen at radii of 10 AU to several hundred AU; they can then accrete onto the protostar in bursts of accretion. The middle time step shows a quiescent phase: there are no fragments within 300 AU of the disk and no new ones are forming, so the disk is relatively smooth; the fragments that already exist were formed during an earlier phase and raised to higher orbits. The rightmost image shows the system at 15 kyr from the formation of the protostar, showing how some of the larger fragments can be sheared apart and produce large fluctuations in the luminosity of the protostar as they are accreted.

The authors of today’s paper begin by arguing that the pristine, mostly atomic gas that collects in these early dark matter mini-halos could fragment by the Jeans criterion in a manner similar to the giant molecular clouds that we see today. This fragmentation would produce small clusters of stars that are relatively isolated from each other, so they are able to model each of the members in the cluster independently. They do this by using numerical hydrodynamical simulations in the thin-disk limit.

Their fiducial model is a gas cloud of 300 solar masses, about 0.5 pc in radius, and at a temperature of 300 K. They find that the disk that forms around the protostars (the large fragments of gas that have contracted out of the original cloud) forms relatively quickly, within about 3 kyr of the formation of the protostar. The disk begins to fragment a few hundred years after it forms. These clumps can then accrete onto the protostar in bursts of accretion or get raised to higher orbits.

Most of the time, however, the protostar is in a quiescent phase and is accreting mass relatively smoothly. The luminosity of the overall star cluster increases during the bursts of accretion, and it also increases as new protostars are formed. The increasing luminosity of the stellar cluster can make it more difficult to detect single accretion events. For clusters of a moderate size of about 16 members, these competing effects result in the star cluster spending about 15% of its time at an elevated luminosity, sometimes even 1,000 times the quiescent luminosity. The star clusters can then have luminosities approaching and occasionally exceeding 10^8 solar luminosities. Population III stars with masses ranging from 100-500 solar masses, on the other hand, are likely to have luminosities of about 10^6 to 10^7 solar luminosities.

These clusters would be some of the most luminous objects at these redshifts and would make good targets for telescopes such as ALMA and JWST. We have few constraints on the star formation rate at such high redshifts, and a lot of uncertainty about what the earliest stars would look like. So, should these clusters exist, even if we couldn't see massive individual Population III stars, we may still be able to detect these clusters of smaller stars and gain insight into what star formation looked like at the beginning of our universe.


March 24, 2015

Dana Chivvis on Serial. by Feeling Listless

Audio Chivvis is one of the producers of the podcast and was interviewed by Miranda Sawyer in The Observer:

"You were obsessive about trying to find the truth. And yet during the series a former detective told Sarah that, in some cases, detectives just want to build a case, not find the truth. Did you find that shocking?
We were completely shocked, this concept of “bad evidence” shocked the pants off us. That detectives might not go and learn some truth in case it didn’t support their theory of the case. I get why, but as a journalist you’re like – No it’s not that I want to find facts to support my theory, I just need to find the facts that tell me what happened."
Related: Adnan Syed's lawyers file first document to challenge murder conviction.


Week notes via Bristol gone by by Goatchurch

I’m doing a lot of politics stuff running up to the general election, but I don’t feel like blogging about it much.

Actually, I’m not doing that much. What’s happening is that all the work I did ten years ago is finally being put into use today by other people, resulting in an event such as this fine rant from the LibDem MP in Bristol West:

To claim, as the website Public Whip does, that I ‘voted very strongly for selling England’s state owned forests’ is misleading in the extreme. I have never voted in favour of selling forest land – I voted against two poorly worded and hyperbolic motions submitted by the Labour Party.

I never believed I’d see the day where MPs would have to answer for the things they voted for in Parliament. Anyway, there’s that and electionleaflets.org and Francis’s Candidate CVs project gathering steam. What a difference five years of internet technology advancement and greater generational awareness can make.

Meanwhile, I was in Bristol for a few days helping a friend with some DIY, because I want to put this kind of laminate floor down in our kitchen on top of some real insulation:

Then I did some painting before getting relieved of my duties for spreading paint all up the paint brush handle.

I spent the night on the Blorenge, then flew at lunchtime completely alone for two hours until I suddenly got dumped down in the bottom landing field in Abergavenny. Nobody took any notice of my death spiral down to the ground; they just kept walking their dogs. I am still working hard to process the data into something meaningful — if this is possible.

An invite went out for a surf on the Dee Bore on Saturday morning. I thought it might be special, being the day after a partial solar eclipse, but it was a damp squib and most of us lost the wave within a few hundred metres.

On Sunday I tried to fly on The Gyrne, but there was no wind and few thermals, so I went off on a long cycle round Lake Vyrnwy with my friend.
The landscape round these parts is ever so crinkly.


The Radial Velocity Method: Current and Future Prospects by Astrobites

To date, we have confirmed more than 1,500 extrasolar planets, with over 3,300 other planet candidates waiting to be confirmed. These planets have been found with different methods (see Figure 1). The two currently most successful are the transit method and the radial velocity method. The former measures the periodic dimming of a star as an orbiting planet passes in front of it, and tends to find short-period large-radius planets. The latter works like this: as a planet orbits its host star, it tugs on the star, causing the star to move in its own tiny orbit. This wobble motion —which increases with increasing planet mass— can be detected as tiny shifts in the star's spectrum. We just found a planet.

That being said, in our quest to find even more exoplanets, where do we invest our time and money? Do we pick one method over another? Or do we spread our efforts, striving to advance all of them simultaneously? How do we assess how each of them is working; how do we even begin? Here it pays off to take a stand, to make some decisions on how to proceed, to set realistic and achievable goals, to define a path forward that the exoplanet community can agree to follow.


Figure 1: Currently confirmed planets (as of December 2014), showing planetary masses as a function of period. To date, the radial velocity method (red), and the transit method (green), are by far the most successful planet-finding techniques. Other methods include: microlensing, imaging, transit timing variations, and orbital brightness modulation. Figure 42 from the report.

To do this effectively, and to ensure that the US exoplanet community has a plan, NASA’s Exoplanet Exploration Program (ExEP) appoints a so-called Program Analysis Group (ExoPAG). This group is responsible for coordinating community input into the development and execution of NASA’s exoplanetary goals, and serves as a forum to analyze its priorities for future exoplanetary exploration. Most of ExoPAG’s work is conducted in a number of Study Analysis Groups (SAGs). Each group focuses on one specific exoplanet topic, and is headed by some of the leading scientists in the corresponding sub-topic. These topics include: discussing future flagship missions, characterizing exoplanet atmospheres, and analyzing individual detection techniques and their future. A comprehensive list of the different SAGs is maintained here.

One of the SAGs focused their efforts on analyzing the current and future prospects of the radial velocity method. Recently, the group published an analysis report which discusses the current state-of-affairs of the radial velocity technique, and recommends future steps towards increasing its sensitivity. Today’s astrobite summarizes this report.

The questions this SAG studied can roughly be divided into three categories:

1-2: Radial velocity detection sensitivity is primarily limited by two categories of systematic effects: first, long-term instrument stability, and second, astrophysical sources of jitter.

3:  Finding planets with the radial velocity technique requires large amounts of observing time. We thus have to account for what telescopes are available, and how we design effective radial velocity surveys.

We won’t talk so much about the last category in this astrobite. But, let’s dive right into the former two.

Instrumentation Objectives

No instrument is perfect. All instruments have something that ultimately limits their sensitivity. We can make more sensitive measurements with a ruler if we make the tick-marks denser. Make the tick-marks too dense, and we can’t tell them apart. Our sensitivity is limited.

Astronomical instruments that measure radial velocities —called spectrographs— are likewise limited in sensitivity. Their sensitivity is to a large extent controlled by how stable they are over long periods of time. Various environmental factors —such as mechanical vibrations, thermal variations, and pressure changes— cause unwanted shifts in the stellar spectra that can all masquerade as a radial velocity signal. Minimize such variations, and work on correcting —or calibrating out— the unwanted signals they cause, and we increase the sensitivity. Not an easy job.


Figure 2: Masses of planets detected with the radial velocity technique, as a function of their discovery year. More planets are being found each year, hand-in-hand with increasing instrument sensitivity. For transiting planets the actual masses are plotted, otherwise the minimum mass is plotted. Figure 43 from the report.

Still, it can be done, and we are getting better at it. Figure 2 shows that we are finding lighter and lighter planets — hand-in-hand with increasing instrument sensitivity: we are able to detect smaller and smaller wobble motions. Current state-of-the-art spectrographs are, in the optical, sensitive down to 1 m/s wobble motions, and only slightly worse (1-3 m/s) in the near-infrared. To put things in perspective, the Earth exerts a 9 cm/s wobble on the Sun. Thus, to find true Earth analogs, we need instruments sensitive to a few centimeters per second. The authors of the report note that achieving 10-20 cm/s instrument precision is realistic within a few years — some such instruments are even being developed as we speak. A further push on these next-generation spectrographs is strongly recommended by the authors; these instruments support a path towards finding Earth analogues.
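As a sanity check on the numbers above, the standard Keplerian semi-amplitude formula reproduces the ~9 cm/s Earth-Sun figure. The short Python sketch below is illustrative only (approximate constants, a circular edge-on orbit) and is not taken from the report:

```python
# Rough check of the ~9 cm/s Earth-Sun wobble quoted above, using the standard
# radial-velocity semi-amplitude
#   K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (M_star + m_p)^(2/3) / sqrt(1 - e^2).
# Constants are approximate; this is an illustration, not code from the report.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
YEAR = 3.156e7       # Earth's orbital period, s

def rv_semi_amplitude(m_planet, m_star, period, incl=math.pi / 2, ecc=0.0):
    """Stellar wobble semi-amplitude in m/s for a single planet on a Keplerian orbit."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * math.sin(incl)
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

print(f"Earth around the Sun: {rv_semi_amplitude(M_EARTH, M_SUN, YEAR):.3f} m/s")
# -> about 0.089 m/s, i.e. the ~9 cm/s wobble quoted above, roughly a factor of
#    ten below the ~1 m/s floor of today's best optical spectrographs.
```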

Science Objectives

Having a perfect spectrograph, with perfect precision, would, however, not solve the whole problem. This is due to stellar jitter: the star itself can produce signals that can wrongly be interpreted as planets. Our ultimate sensitivity or precision is constrained by our physical understanding of the stars we observe.

Stellar jitter originates from various sources. The sources have different timescales, ranging from minutes and hours (e.g. granulation), to days and months (e.g. star spots), and even up to years (e.g. magnetic activity cycles). Figure 3 gives a good overview of the main sources of stellar jitter. Many of the sources are understood and can be mitigated (green boxes), but other signals still pose problems (red boxes) and require more work. The blue boxes are somewhere in between. We would like to see more green boxes.


Figure 3: An overview diagram of stellar jitter that affects radial velocity measurements. Note the different timescales. Green boxes denote an understood problem, but the red boxes require significantly more work. Blue boxes are somewhere in between. Figure 44 from the report.

Conclusion

The radial velocity method is one way to discover and characterize exoplanets. In this report, one of NASA's Study Analysis Groups evaluates the current status of the method. Moreover, with input from the exoplanet community, the group discusses recommendations for moving forward, to ensure that this method continues to be a workhorse for finding and characterizing exoplanets. This will involve efficiently scheduled observatories, and significant investments in technology development (see a great list of current and future spectrographs here), in data analysis, and in our understanding of the astrophysics behind stellar jitter. With these efforts, we take steps towards discovering and characterizing true Earth analogs.

Full Disclosure: My adviser is one of the authors of the SAG white paper report. I chose to cover it here for two reasons. First, I wanted to further your insight into this exciting subfield, and secondly, likewise, my own.


Prometheus, Pandora, and the braided F ring in motion by The Planetary Society

Cassini recently took a long, high-resolution movie of the F ring, catching a view of its ringlets, clumps, and streamers, and two potato-shaped moons, Prometheus and Pandora.


Desert Moon, Narrated by Former Astronaut Mark Kelly, Now Available Online by The Planetary Society

Desert Moon, a 35-minute documentary that tells the story of Dr. Gerard Kuiper and the dawn of planetary science, is now available online.


March 23, 2015

Talks: David Bordwell and Kristin Thompson. by Feeling Listless

Film  David Bordwell and Kristin Thompson are two of the most important film theorists of the past few decades, and there are few people who've studied movies in an academic context who haven't read their work, or who couldn't say that work was part of the reason they managed to pass their course.  If it hadn't been for them I certainly wouldn't have been able to write my MA dissertation: their lucid, clear and thoughtful explanations of narrative and genre got me through, and of all the books which I read during that year, it's their Film Art: An Introduction which I've kept referring back to.  They continue to write about film at their blog, which just yesterday posted about the origins of the very thing my dissertation was about.



Finding material featuring Thompson is tricky because there seem to be a few of them featured on YouTube, but I did manage to find this video essay:



... and appearances in Q&As at Ebertfest in 2011:






Bordwell was much easier to track.  For chronological ease, I've put these appearances in the relevant years.

2011

Ebertfest 2011 - Do Film Students Really Need To Know Much About Classic Films?



2012

How Motion Pictures Became the Movies 1908-1920


How Motion Pictures Became the Movies 1908-1920 from David Bordwell on Vimeo.

Constructive Editing in Robert Bresson's Pickpocket


Constructive Editing in Robert Bresson's Pickpocket from David Bordwell on Vimeo.

TIFF Keynote | Industry Dialogues 2012



Ebertfest 2012 - Wild and Weird Q&A



Ebertfest 2012 - ON DEMAND: Movies without Theate



Fantasia 2012 - David Bordwell wins Lifetime Excellence Award.



Ebertfest 2012 Q&A: CITIZEN KANE [Commentary by Roger Ebert Intro by David Bordwell]



Which is incredibly poignant from the off, I think you'll agree.  The commentary they're referring to appeared on this 2001 release of the film (along with another by Peter Bogdanovich) but is missing from all subsequent releases, replaced with a track by Ken Barnes.

2013

Ebertfest 2013 - The Ballad of Narayama introduction



Ebertfest 2013 - The Ballad of Narayama Q&A:



Mildred Pierce: Murder Twice Over


Mildred Pierce: Murder Twice Over from David Bordwell on Vimeo.

2014

CinemaScope: The Modern Miracle You See Without Glasses


CinemaScope: The Modern Miracle You See Without Glasses from David Bordwell on Vimeo.

ART of the MARTIAL ARTS FILM | David Bordwell | Higher Learning



On Master of the House:


Cheap ebook! by Charlie Stross

The US (not UK, alas) ebook edition of "The Atrocity Archives", Laundry Files book 1, is on a special promotion. It's $1.99 for the rest of March! You can find the Amazon.com Kindle edition here, the Barnes and Noble Nook edition here, the Google Play store version here, and the Apple iBooks store version here.

If you're reading this on my blog you're probably already aware of my Laundry Files series; in case you came here from elsewhere, "The Atrocity Archives" is book #1 in the sequence, and the latest novel, "The Annihilation Score" (book 6) comes out in the first week of July. In fact, I just finished checking the page proofs now, so it's heading back to the typesetter agency and then on to the printers next month ...


March 22, 2015

Mars Academy by The Planetary Society

A new project—"Mars Academy"—aims to expand the cosmic horizon and offer a broader sense of opportunity for at least one group of underprivileged children in an impoverished neighborhood in Rio de Janeiro, Brazil.


March 21, 2015

The curious case of #RPSFail by Simon Wardley

The Register screamed 'Another GDS cockup: Rural Payments Agency cans £154m IT system', a figure it then cried had 'escalated to £177m', and so the media told us of a lamentable Government IT failure.

These things happen; the private sector has a terrible record of IT project failures but, fortunately, a big carpet to sweep them under. The NAO is bound to investigate.

The Register told us how GDS championed "the new agile, digital approach by the Rural Payments Agency" whilst apparently "The TFA has always been opposed to the 'digital-by-default' dogma".

The Register made it plain, it "understands that the Government Digital Service was responsible for throwing out a small number of suppliers working on RPA instead and went for a 40-plus suppliers approach - focusing too much attention on the front end, and little attention to integration between front and back." 

The finger of blame pointed firmly at GDS.  So, what actually happened? How did GDS get this so wrong? Well, we won't know until the NAO investigates or GDS posts a post mortem. Everything else is speculation and El Reg loves to try and cause a bit of outrage. But since the media is in the speculation game, I thought I'd read up more and do a little bit of digging.

The first thing I noticed in the mix was Mark Ballard's article. It started with the line "Mark Grimshaw, chief executive of the Rural Payments Agency is either an imbecile or a charlatan, if Farmers Weekly is anything to go by."

What? I thought this was GDS' fault?

"He's been telling the agricultural press that his agency's prototype mapping tool is a failure. That's like saying a recipe is duff because your soufflé collapsed on your first trial run."

Prototype? A £177 million prototype?

"Farmers were apparently unhappy the prototype was not working as well as a production-quality system. So Grimshaw called a press conference yesterday and announced that the sky was falling down."

Hmmm. Something doesn't seem right here.

"The odd thing was that the mapping tool had only just been released as a public beta prototype. A date hadn't even been scheduled for a live roll-out."

Hence, I decided to check on the Rural Payment System (RPS) that was being built for the Rural Payments Agency (RPA). Sure enough, the RPS had only recently been released as a beta. Also, the cost is only (cough) £73.4 million. Still an awful lot, but how did it get up to £177m? I checked El Reg; it was apparently a "source".

One eye opener when checking on the Government site was that RPA would "help prevent fines (‘disallowance’) for making payments that don’t comply with CAP rules (~£600m since 2005)"

Really? How come we've been paying through the nose for compliance failures? A quick search and I stumble upon the fact that the RPS is the second incarnation of a system. The first incarnation (SPS) was so shockingly poor that in 2009 the NAO urged the DEFRA agency to replace the £350m system even though it was only 4 years old.

£350m? … Oh but it gets worse.

In 2009, according to the NAO, along with the £350m system we had incurred an additional £304m in administration costs, £280 million in disallowance and penalties, and £43m in irrecoverable overpayments. The cost per transaction was £1,743 and rising (22% over 4 years). This compared to £285 per claim under the simpler Scottish system. And that was 2009; heaven knows the cost to date.

What marvels of genius had created this 'complex software that is expensive and reliant on contractors to maintain'? Why, expensive consultants. In fact, 100 contractors from the system's main suppliers, at an average cost to taxpayers of £200,000, in 2008/2009. The SPS is so complex and "cumbersome" because of "customisation which includes changes to Oracle's source code".

Now, £73m is a bad loss but nowhere near as bad as £650m (system + administration) for the previous system with two main contractors. El Reg's wisdom of keeping it down to a few suppliers has just gone up in smoke. However, just because the past was a debacle doesn't mean the future has to be. This was one of UK Gov's exemplars and GDS had been pretty upbeat about it.

A bit more digging and I come across Bryan Glick's article. Of note - 

'The Government Digital Service (GDS) introduced new controls over IT projects, designed to avoid big, costly developments depending on contracts with large suppliers. When Defra/RPA went to GDS with its initial proposal – a 300-page business case, according to one source – it was quickly knocked back.'

'Instead of a few big suppliers – , for example – RPA would be agile and user-led, with multiple small- and medium-sized suppliers.'

This sounds all very sensible. But still it went wrong … even if far less money was lost. Now the project sounds complex, according to the article there are - "multiple products involved that need integrating – more than 100, according to one insider".

Two suppliers seemed to be called out - one in the article, Kainos, and one in the comments, Abaco, along with the line 'this Italian company have not delivered most of the work on time and is a factor in the whole project being delayed'.

Interesting ... who are they? are they really involved? what is the role of any delays? So, more digging.

Kainos is the sole delivery partner for the original mapping prototype which was comprehensively praised by Defra. It's an agile development house, regarded by the Sunday Times as one of the top 100 places to work and seems to have a pretty strong pedigree.

Abaco provide SITI AGRI - the first thing I noticed, which caused my heart to sink, was 'Oracle'. Don't tell me we're customising again?  It mentions "open" but I can find no record of their involvement in the open source world. It talks about SOA and even provides a high level diagram (see picture) and attractive web shots.


But are they involved or is this a red herring? I dig and discover a DEFRA document from 29/09/2014 which does in fact talk about SITI AGRI's spatial rules engine and its use in the RPA.

So why do I mention this? Well Bryan had also noted :-

'even with very few users, back-end servers would quickly reach 100% utilisation and “fall over”'

'core of the problem was identified as the interface between the mapping portal and the back-end rules engine software'

OK, so we can now take a guess that part of the RPA solution involved the graphical Kainos mapping solution providing the front end with a connection to the services layer of the Abaco 'Oracle' based spatial rules engine system.  This sent alarm bells ringing. 

Why? 

Well, Abaco had a mapping tool and it claims to be web based - it even provides good looking screen shots. If this is the case, why not use it?

I'm clutching at straws here and this is wild speculation based upon past experience, but being able to provide a system through a browser doesn't mean it's designed to scale for web use, especially not to 100,000+ farmers. Systems can easily be designed purely for internal consumption. There is a specific scenario called 'Lipstick on a Pig' where someone tries to add a digital front end to a back end not designed to cope with scale. It's usually a horror story.

Could this be what happened? A digital front end designed for scale attached to a back end that wasn't? That might explain the 'interface' and 'utilisation' rates and given an Oracle back end then I could easily believe the license fees and costs would be high.  However, it doesn't ring true and that's the problem with such speculation. Context.

GDS is full of highly experienced engineers. I can't see them commissioning a back end system that doesn't scale and trying to simply bolt on a digital front end. They would know that internal web based systems are rarely capable of scaling to public usage. This would have had red flags all over it.

Something else must have happened. As to what, we'll have to wait to find out.

Bryan also noted "Contrary to some reports, the £155m RPA system has not been scrapped entirely. According to the RPA 80% of farmers have already registered using existing online". This raises the question of what can be saved for future use? What has actually been lost? What is the cost?

At best we know that there was an original monumental IT disaster caused by customising an Oracle system with expensive consultants. It cost over £650m and had mounting fees. Certainly some lessons seem to have been learned by breaking the project into small components and using a broad supply base.

An approach of prototype and testing early with feedback seems to have been used but there is an issue about how this has been handled by RPA. If Ballard is correct then it might be worth explaining to Chief Execs of departments that if a prototype is still undergoing development then a) don't invite everyone b) don't run around claiming the sky has fallen on a prototype c) do communicate clearly.

However, there is no denying that something has obviously gone wrong with the current approach, though it's unclear how much of the £73m has been lost. However there are some questions I'd like to see asked.

Questions
1) What is the actual cost spent and where did this go - license fees, consultants, other?
2) How much of what has been developed can be recovered for future or other use including other projects?
3) Was this system broken down into small components?
4) Was the interface between mapping system and the rules engine the cause of failure?
5) Is SITI AGRI the rules engine? 
** a) Were there delays as claimed?
** b) Was the rules engine designed for web scale?  
** c) If it wasn't designed for web scale then why did GDS commission the system for use on the web?
6) Did they adopt an open source route? Was there an open source alternative?
7) Who pulled the plug and why? 
** a) Should the plug have been pulled earlier?
8) Was this a case of 'Lipstick on a Pig?'

Of course, this is all speculation based upon a little digging and reading my own past experience of disasters into the current affair. What the actual story is we will have to wait to find out.

Since I'm in the mood for wild speculation, I'll also guess that, despite the headline-grabbing bylines, the Register will pretty much get everything wrong in terms of cost, the cause (number of suppliers) and its tirade.


TED minus x equals ... by Feeling Listless

People Steven Levy writes for Medium about attending this year's TED. For fans like me, it's almost like a film festival review, previewing upcoming attractions on their YouTube channel:

"Sitting through them makes your brain feel like a mushy piñata, whacked by one mind-blowing idea after another. Did you know that babies use sophisticated data analysis to guide the way they use squeeze toys? Meet the Frank Gehry of the rain forest, who creates the bamboo edifices in Bali. Believe it or not, when adulterers say to their betrayed spouses It’s not about you, they’re telling the truth. Oh, and here’s a guy who landed a spaceship on an asteroid."
The Monica Lewinsky talk has just been posted:


A short note to myself by Dan Catt

To make sure this good old blog code is still working.

After far too much wrangling with custom blog code I think I'm nearly there, at least on being able to upload new posts.

The longer version is this...

I'm trying to own my own content, images, posts, audio and video. To do that I'm basically writing posts as flat markdown files into a directory structure that's reflected in the URL structure /year/month/day.

Then a whole bunch of node code converts the markdown files into .html files and this all runs as a static website. All that was fine, but it rebuilt the whole site every time, in turn making FTPing up all the files a PITA.

The new code only commits files to the file system if they are new or have changed. This in turn creates a list of files that have changed in a .txt file, which is fed into the tar command. The last step was for the script to upload the compressed tar file and uncompress it.

Thus giving me a single script to build and upload just the changes to the site.
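For flavour, here is a minimal Python sketch of that incremental build-and-package step. Dan's real pipeline is node, and the directory names, changed-file list and tarball name used here are made-up stand-ins, not his actual paths:

```python
# Minimal sketch of the incremental build step described above: copy only new
# or changed files into the output tree, write their names to a .txt list, and
# feed that list to tar. Dan's real pipeline is node; the paths used here
# (site/, build/, changed.txt, changes.tar.gz) are made up for the example.
import hashlib
import subprocess
from pathlib import Path

SRC = Path("site")             # freshly rendered .html files (/year/month/day)
OUT = Path("build")            # mirror of what was last uploaded
CHANGED_LIST = Path("changed.txt")

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

changed = []
for src_file in SRC.rglob("*.html"):
    rel = src_file.relative_to(SRC)
    dst = OUT / rel
    # Only commit files that are new or whose contents have changed.
    if not dst.exists() or digest(dst) != digest(src_file):
        dst.parent.mkdir(parents=True, exist_ok=True)
        dst.write_bytes(src_file.read_bytes())
        changed.append(str(rel))

# The list of changed files is fed into the tar command, as in the post.
CHANGED_LIST.write_text("\n".join(changed) + "\n")
if changed:
    subprocess.run(
        ["tar", "-czf", str(Path("changes.tar.gz").resolve()),
         "-T", str(CHANGED_LIST.resolve())],
        cwd=OUT,
        check=True,
    )
    # Uploading changes.tar.gz and uncompressing it on the server is the final
    # step, left out of this sketch.
```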

Because...

I want to bolt on an easy way for me to manage my images, which again involves just dropping them into the right folders and the script can deal with resizing them to all the sizes I need.

I think I'm rebuilding my own old version of flickr.

Of course.

Oh, and also so that I can output the whole lot in gopher format for the gopher server I'm getting up and running.


The Pen by Dan Catt

The Pen

I've been asked about my pen (for reals) a couple of times, so I thought I'd write a blog post about it. It's a Tombow Zoom 707 Ballpoint Pen (amazon UK/US), it cost £28 and I bought it for myself as a Christmas present.

I keep two Field Notes notebooks in my pocket, at night I take them out and put them on the bedside table. My life is dense, not hectic, not crazy busy, just every moment is filled. We have three kids, we home educate, the start-up I'm involved in is blowing up, I try to swim, I try to run, I'm learning the bass, I try and put together a podcast that takes an age, sometimes I even try to write a blog post or two. In all of that there's hardly any time to do other stuff, although that doesn't stop me thinking about other stuff. That other stuff goes down in one of the two notebooks.

The Pen

When I think of something I often can't get to a laptop or my phone in time, I tried, the thoughts don't stay in my head long enough to survive the gauntlet of children asking me things on the way upstairs. If you've watched the film Memento it's like that scene where he's looking around for a pen to write the thing down before he forgets it. I decided I needed notebooks and a pen with me at all times.

I think it's the most I've ever spent on a pen.

Before this I used the Field Notes pen that came with the notebooks. It's a good pen, feels nice to hold, flows well, but the clip doesn't clip it into my pocket properly. I can't slide it into my jeans without having to put a fingernail round the back of the clip to make sure it clips properly. When I sit down the pen wouldn't stay in the same place.

It was all kinds of wrong.

The Zoom 707 slides into the pocket right next to the seam, and better still it stays there, after all I didn't want to lose a £28 pen. For the next few days I'd reach down and feel for the red ball on the clip, to know it was still there.

The Pen

Now it's a reflex action, I'll brush my hand past the side seam of my jeans and feel if the pen's clip is still there. When I feel it I know I can't forget anything, life is speeding on but in that one moment I know I haven't left anything behind. If I need to remember something it's in the notebook, if it's in the notebook I don't need to remember it. I can clear my mind and move onto the next thing.

When I stop to take a moment, I can touch the red ball feel it against my fingertips and the memory of the last thing I wrote comes back to me. It's a shortcut to having to open the notebook and read it back.

It's a memory machine, a meditation device and an anchor.

The Pen

All this is fine and good, but how does it write?

It was all scratchy when I first used it, I thought I'd made a terrible mistake. Other pens I'd had took a word or two for the ink to get going, with this I wrote a page and it still didn't seem right. I just needed to give it more time, after a while of being in my pocket and being used the ink started flowing properly.

I was used to the thick slick blackness of the Field Notes pen, the Zoom is light and narrow. My handwriting is spidery and the pen suits it well.

It turns out, importantly, that I can now write more words per line, more thoughts per page with the new pen. It's a little thing but these notebooks aren't big so utilising more of the page is a good thing. I wasn't expecting it, so it's a bonus.

It writes fine and good.

That is the story of my new pen.


Wreaking Havoc with a Stellar Fly-By by Astrobites


Illustration of the RW Aurigae system containing two stars (dubbed A and B) and a disk around Star A. The schematic labels the angles needed to define the system’s geometry and angular momenta. Fig. 1 from the paper.

Physical models in astronomy are generally as simple as possible. On the one hand, you don’t want to oversimplify reality. On the other hand, you don’t want to throw in more parameters than could ever be constrained from observations. But some cases deviate just enough from a basic textbook case to be really interesting, like the subject of today’s paper: a pair of stars called RW Aurigae that features a long arm of gas and dust wrapped around one star.

You can’t model RW Aurigae as a single star with a disk of material around it, because there is a second star. And you can’t model it as a regular old binary system either, because there are interactions between the stars and the asymmetric circumstellar disk. The authors of today’s paper create a comprehensive smooth particle hydrodynamic model that considers many different observations of RW Aurigae. They consider the system’s shape, size, motion, composition, and geometry, and they conduct simulated observations of the model for comparison with real observations.

A tidal encounter


Simulated particles in motion in RW Aurigae. This view of the smooth particle hydrodynamic model has each particle color-coded by its velocity toward or away from the observer (the color bar is in km/s). Star A is the one in front with a large tidally disrupted disk. Fig. 2 from the paper.

The best-fitting model of RW Aurigae matches observations of many different aspects as observed today, including particle motions. Because the model is like a movie you can play backward or forward in time, the authors are able to show that the current long arm of gas most likely came from a tidal disruption. This was long suspected to be the case, based on appearance alone, but this paper’s detailed model shows that the physics checks out with our intuition.

What exactly is a tidal disruption? Well, in this case, over the last 600 years or so (a remarkably short time in astronomy!), star B passed close enough to the disk around star A to tear it apart with gravity. Because some parts of the disk were significantly closer to star B than others, they felt different amounts of gravitational force. As time went on, this changed the orbits of individual particles in the disk and caused the overall shape to change. This is the same effect that creates tides on Earth: opposite sides of the Earth are closer to or farther from the Moon at any given time, so the oceans on the far side from the Moon feel less of the Moon's pull than the water closer to it. The figure above shows present-day motions of simulated particles in RW Aurigae that resulted from the tidal encounter. The figure below shows snapshots from a movie of the hydrodynamic model, from about 600 years in the past to today. Instead of representing motion, the brighter regions represent more particles (higher density).
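To put a number on "different amounts of gravitational force": to leading order, the differential (tidal) acceleration that star B, of mass M_B at distance d, imposes across a piece of the disk a distance r from star A is roughly

$$ \Delta a \approx \frac{2 G M_B r}{d^3} $$

so halving the separation d during the fly-by makes the disk-stretching effect roughly eight times stronger, which is why a single close passage can do so much damage. (This is the generic tidal expansion for r much smaller than d, not a formula quoted from the paper.)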


Snapshots of the RW Aurigae model over a 600-year time span as viewed from Earth. From top left to bottom right the times are -589, -398, -207, and 0 years from today, respectively. Brighter colors indicate higher density. There is a stream of material linking stars A and B in the bottom right panel, but it is not visible here due to the density contrast chosen. Fig. 4 from the paper.

Simulating observations

Observational astronomers are always constrained to viewing the cosmos from one angle. We don’t get to choose how systems like RW Aurigae are oriented on the sky. But models let us change our viewing angle and better understand the full 3D physical picture. The figure below shows the disk around star A if we could view it from above in the present. As before, brighter areas have more particles. Simulated observations, such as measuring the size of the disk in the figure below, agree well with actual observations of RW Aurigae.


Top-down view of the present-day disk in star A from the RW Aurigae model. The size of the model disk agrees with estimates from observations, and the disk has clearly become eccentric after its tidal encounter with star B. Fig. 9 from the paper.

The final mystery the authors of today's paper explore is a dimming that happened during observations of RW Aurigae in 2010/2011. The model suggests this dimming was likely caused by the stream of material between stars A and B passing in front of star A along our line of sight. However, since the disk and related material are clumpy and still changing shape, they make no predictions about specific future dimming events. Interestingly, another recent astro-ph paper by Petrov et al. reports another deep dimming in 2014. They suggest it may arise from dust grains close to star A being "stirred up" by strong stellar winds and moving into our line of sight.

Combining models and observations like today’s paper does is an incredibly useful technique to learn about how structures of all sizes form in the Universe. Tides affect everything from Earth, to stars, to galaxies. This is one of the first cases we’ve seen of a protoplanetary disk having a tidal encounter. The Universe is a messy place, and understanding dynamic interactions like RW Aurigae’s is an important step toward a clearer picture of how stars, planets, and galaxies form and evolve.


The Mapping of Pluto Begins Today by The Planetary Society

When New Horizons flies past Pluto in July, we will see a new, alien landscape in stark detail. At that point, we will have a lot to talk about. The only way we can talk about it is if those features, whatever they turn out to be, have names.


Report: NASA May Be Hard-Pressed to Launch SLS by November 2018 by The Planetary Society

A report released by NASA’s Office of Inspector General warns that the agency may be hard-pressed to have its Kennedy Space Center launch facilities ready by November 2018.


Announcing: Planetary TV! by The Planetary Society

Planetary Society Media Producer Merc Boyan presents our new video resource.


March 20, 2015

I can't paint. by Feeling Listless

Art Tomorrow morning Cass Arts opens a Liverpool outpost on School Lane near The Bluecoat in Liverpool. Full details here. It's in the newly vacated old Cath Kidston shop.

As part of the PR process, they were nice enough to send me some art supplies for me to try out the sort of thing they'll be selling.


Suitably prodded I decided to have another go.  I say another go because it was during art at school that it became abundantly clear I can't paint; I don't have any artistic DNA in my genome.  Much of my GCSE was spent copying out Disney characters and drawing terrible renditions of chimney boxes, and my single (non General Studies) A-Level, which happens to be in Art, was gained due to a sympathetic teacher who allowed me to create collages for two years.  Needless to say, nothing has changed.


Still. I did watch this instructional video on YouTube where a man paints a tree and had some idea about light and shading but in the end, as you can see I still did the collage, except in paint.  Is it the sky?  Is it the sea?  Is it a bit of both?  Who can tell?  I don't have the patience now to learn how to do this effectively or indeed the time.  So for all my readiness to criticise the art of others, it's always with a sympathetic eye.  At least they can do this.


Update by Charlie Stross

I've been quiet due to (a) recovering from delivering the hopefully-final draft of "Dark State" (the first book in the "Empire Games" trilogy, due from Tor next April), (b) visiting relatives, (c) having a nasty head-cold, and (d) having the page proofs of "The Annihilation Score" (July's Laundry Files novel) land on my desk. Normal service will, as they say, resume as soon as possible.

My current plan is to tackle the aforementioned page proofs, work on the next book, then head for Dysprosium, the British eastercon, over the Easter bank holiday weekend. And before I go I really ought to fit in time to catch up with the last Jim Butcher book that I haven't read yet, because he's one of the two guests of honour at Dysprosium and I'm on program to interview him. (If you've been reading the Laundry Files you might have noticed a tip of the hat in his general direction.)

Finally, here is an extremely dangerous toy (probably illegal in all sane jurisdictions).


Not Too Late: Obama Should Shut Down Gitmo by Albert Wenger

I was going to write about something else this morning but then I happened upon a video of President Obama saying yesterday that he should have closed Guantanamo Bay on day 1 of his presidency.

I have written here on Continuations, in 2010 and in 2013, that I wish we had the courage to close down Guantanamo Bay.

It’s good to at least see the President Obama acknowledge this as an error. But he is still in office and he should do this using executive power. After all, he did sign an executive order authorizing the indefinite detention of terrorism suspects (in 2011) which is now a key ingredient in maintaining Gitmo together with the National Defense Authorization act of 2012 which does the same, despite having been partially blocked by a judge.

So, I will call and write to the White House asking President Obama to use executive power to close down Guantanamo Bay and stop indefinite detention more broadly. Please take a minute to do the same.


On and On They Spin by Astrobites

Title: On the Nature of Rapidly Rotating Single Evolved Stars

Authors: R. Rodrigues da Silva, B. L. Canto Martins, and J. R. De Medeiros

First Author’s Institution: Department of Theoretical and Experimental Physics, Federal University of Rio Grande do Norte

Status of paper: Published in ApJ

Nothing sits still in our Universe. Everything is always on the move. Like planets, stars rotate. Really, they are the most obsessive ballet dancers, perpetually doing spins (or fouettés, if you will) until they die. The authors of this paper found that certain types of stars unexpectedly display rapid rotation when they are not supposed to.

Astronomers like — really like — to categorize things. Stars are categorized according to their spectral features (i.e., the presence of certain elements in the spectrum) and can be one of the following spectral types: O, B, A, F, G, K, or M (“Oh Be A Fine Girl Kiss Me”).  Temperature decreases from spectral type O to spectral type M, with O stars being the hottest and M stars the coolest. However, because stars of the same spectral type can have widely different luminosities (and so different radii, by the Stefan-Boltzmann law, which relates the luminosity, radius, and surface temperature of a star), a second classification by luminosity is added, in which stars are assigned Roman numerals (I for supergiants through V for dwarfs). The paper today focuses on evolved stars of spectral types G and K and luminosity classes IV (subgiants), III (normal giants), II (bright giants), and Ib (supergiants). Supergiants are the brightest and largest, followed by bright giants, normal giants, and finally subgiants.
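
As a reminder (this is a standard relation, not something taken from the paper), the Stefan-Boltzmann law in question can be written as

    L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4}
    \quad\Longrightarrow\quad
    R = \sqrt{\frac{L}{4\pi\sigma}}\, T_{\mathrm{eff}}^{-2}

so at a fixed surface temperature (i.e. a fixed spectral type) a more luminous star must have a larger radius, which is exactly why the extra luminosity classification is needed.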

Humans wind down over the years, and stars do too. Stars spin down as they age, through loss of angular momentum carried away by outflows of gas ejected from their atmospheres, also known as stellar winds. We therefore expect evolved stars to spin more slowly than young stars. Evolved G- and K-stars are known to be slow rotators (rotating at a few km/s), with rotation decreasing gradually from early-G stars to late-K stars. However, as is always the case in astronomy, there are counterexamples. As far back as four decades ago, astronomers found rapidly rotating G and K giant stars (luminosity class III) spinning as fast as 80 km/s.  How and why these stars are able to spin this fast is still a puzzle, with proposed explanations including coalescing binary stars, sudden dredge-up of angular momentum from the stellar interior, and engulfment of hot Jupiters (Jupiter-sized exoplanets that orbit very close to their parent stars, hence the name) by giant stars causing a spin-up.

Using a set of criteria, the authors of this paper hunted for single rapidly rotating G- and K-stars in the Bright Star Catalog and the catalog of G- and K-stars compiled by Egret (1980). Out of 2010 stars, they uncovered a total of 30 new rapidly rotating stars among subgiants, giants, bright giants, and supergiants. Until now, rapid rotators had only been found among giant stars; this work reports for the first time the presence of such rapid rotators among subgiants, bright giants, and supergiants. In fact, these objects make up more than half of the rapid rotators in their sample. Figure 1 shows the velocities along the line of sight (v sin i) versus effective temperature for their sample of evolved rapid rotators, compared with G and K binaries (i.e., binary star systems consisting of G- and K-stars). The similarity between the two populations implies that a synchronization mechanism, like the one tying rotation to orbital motion in the binary systems, may also operate in the single evolved stars.  That interesting relation aside, the main point to note from the plot is the large observed velocities of the rapid rotators compared to the mean rotational velocities of G- and K-stars.
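
A quick note on the measured quantity (standard definitions, not specific to this paper): spectroscopy only yields the equatorial rotation velocity projected onto the line of sight,

    v \sin i = \frac{2\pi R}{P_{\mathrm{rot}}} \sin i

where R is the stellar radius, P_rot the rotation period, and i the (usually unknown) inclination of the spin axis. Since sin i ≤ 1, the true equatorial velocities are at least as large as the plotted v sin i values.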


FIG 1 – Projected rotational velocity along line of sight, v sin i, vs. effective temperature Teff for rapidly rotating single G- and K-stars (filled circles) and rapidly rotating G and K binary systems (open circles). The rectangular zones at the bottom of the figure represent the mean rotational velocities for G- and K-stars that are subgiants (solid line), normal giants (dashed line), and supergiants (dashed–dotted line).

 

The rapidly rotating stars were also analyzed for far-IR excess emission, which may indicate the presence of warm dust surrounding the stars (warm dust emits radiation in the mid- to far-IR regime). Looking at Figure 2, a trend of far-IR excess emission is clearly seen for almost all of the 23 stars analyzed. The origin of the dust close to these stars is not well understood; some attribute it to stellar winds driven by magnetic activity, while others hypothesize that it comes from collisions of planetary companions around these stars. In any case, any theory that tries to explain the nature of rotation in these single systems needs to account for the presence of warm dust.


FIG 2 – Far-IR colors for 23 single, evolved G and K rapid rotators. The left plot shows V-[12] color (where V refers to the optical V-band and [12] to IRAS' 12 µm band), while the right plot shows V-[25] (where [25] is IRAS' 25 µm band). The rapid rotators are the red points, while the dashed, solid, and dotted lines are far-IR colors for normally behaving G- and K-stars compiled by three different studies. The large offsets of the red points from the lines are evidence of far-IR excess emission.

 

The authors propose that coalescence between a star and a low-mass stellar or substellar companion (i.e., a brown dwarf), or tidal interaction in planetary systems with hot Jupiters, are plausible scenarios to explain single rapidly rotating evolved stars. Because each scenario should produce different chemical abundances, the authors suggest searching these stars for changes in specific abundance ratios, such as the relative enhancement of refractory over volatile elements, to differentiate between the various possible scenarios.

Spinning stars are cool. Even cooler are rapidly rotating giant-like stars that spin out of the clutches of theoretical predictions. While in the past rapid rotators among evolved giant stars could be explained away with small-number statistics, the authors of this paper have added an order of magnitude more objects to the list, forcing stellar astrophysicists to come face-to-face with the question of the nature of these rapid rotations.

 


LPSC 2015: First results from Dawn at Ceres: provisional place names and possible plumes by The Planetary Society

Three talks on Tuesday at the Lunar and Planetary Science Conference concerned the first results from Dawn at Ceres. Chris Russell showed a map of "quads" with provisional names on Ceres, Andreas Nathues showed that Ceres' bright spot might be an area of plume-like activity, and Francesca Zambon showed color and temperature variations across the dwarf planet.


LPSC 2015: "Bloggers, please do not blog about this talk." by The Planetary Society

One presenter at the Lunar and Planetary Science Conference asked the audience not to blog about his talk because of the embargo policy of Science and Nature. I show how this results from an incorrect interpretation of those policies. TL;DR: media reports on conference presentations do not violate Science and Nature embargo policies. Let people Tweet!


Launch Pad Animals, Ranked by The Planetary Society

A semi-authoritative ranking of creatures that co-inhabit rocket launch sites around the world.


Slides from the LPSC 2015 Session on the Community Response to NASA's Budget Request by The Planetary Society

The Planetary Society helped organize a community response to the latest NASA budget at the 2015 meeting of the Lunar and Planetary Science Conference.


March 19, 2015

Soup Safari #17: Rosemary and Chestnut at East Avenue Bakehouse. by Feeling Listless







Lunch. £4.25. East Avenue Bakehouse. 112 Bold Street, Liverpool, Merseyside L1 4HY. Phone:0151 708 621. Website.


The Underwater Menace DVD. Now available to pre-order? by Feeling Listless

TV File this under "unlikely Amazon purchases":



A friend on Twitter noticed this pre-order page for the looong delayed release of Doctor Who's The Underwater Menace, the only surviving episodes of Doctor Who still awaiting release (that we know of). So of course I put a pre-order in even though I don't really expect it to be posted to me by the 7th April and not at that price, which is back to the funny numbers for single stories of the original VHS releases (£39.99 for Revenge of the Cybermen? Yes, please!). Here are the problems:

(1) Kasterborous has a statement from the BBC which says it's been removed from the schedule.

(2) The BBFC website doesn't list a recent classification. The episode three assessments are from over a decade ago when it appeared in the Lost in Time boxed set and before that The Ice Warrior cardboard box.

(3) Gallifrey Base is very ¯\_(ツ)_/¯

(4) It's not available at the BBC Shop.

So it's probably a database error.  But nothing in the world could stop me from pre-ordering.  Just in case.


Hack the Environment(al Data) by Albert Wenger

I have long been writing about climate change on Continuations. Mostly it has been about more evidence with the occasional call to action. Today is such a call if you live in London: Our portfolio companies AMEE and Twilio together with IBM, CapGemini and others are organizing an Environmental Data Hack Weekend this coming weekend.

Climate change is showing up in data everywhere. But many of the data sets are dispersed and not easily accessible. The Environmental Data Exchange was built by Digital Catapult and AMEE to bring together disparate datasets and make them more easily accessible. The goal of the Data Hack Weekend is to test out this new platform and build interesting applications on top of it.

I believe that as more people are exposed to the data that is already available, and as more data becomes available, awareness will continue to grow of how grave the situation already is. One example here in the US is the ongoing California drought, which is turning into a major water emergency.

So, dear friends in London who care about this: go and join the Environmental Data Hack Weekend


A Possible Detection of Dark Matter in a Dwarf Galaxy! by Astrobites

Dark matter has been observed only “indirectly”

Most readers of this blog have likely heard of dark matter before; you may even take it for granted that it exists. A huge body of astronomical research offers conclusive evidence for some kind of mass in the universe which a) exerts gravitational force on regular matter, b) doesn't emit light, and c) seems to be 5 times more abundant than regular matter. Evidence for this extra mass comes from gravitational effects, including the gravitational lensing seen in merging galaxy clusters such as the Bullet Cluster (Fig. 1).

These observations indicate that there is much more mass in the universe than we can see with light.

Fig. 1 – An image of the Bullet Cluster, which is believed to be made of two smaller galaxy clusters which are in the process of merging. This image shows galaxies, hot cluster gas (red), and total mass from gravitational lensing (purple). What’s most interesting is that the centroids of the total mass (representing the dark matter) is offset from the gas. When the clusters collided, the gas’ viscosity kept it stuck together, while the dark matter continued through because it only interacts via gravity. Image C/O NASA/CXC/STScI

But these gravitational clues are only “indirect evidence” for dark matter’s existence. It is like seeing a dancing marionette puppet (the gravitational effects), and inferring there must be strings (dark matter) to make it move.

Some models (such as Modified Newtonian Dynamics) try to explain these gravitational effects through a subtle change to our theory of gravity. These changes could make gravity stronger in certain places without requiring more mass. This explanation is like saying the puppet is held up by magnets: it could explain the puppet's movement just as well as strings could. To prove dark matter really exists, we must actually see the strings, not just the puppet. Today's paper offers some of the best evidence yet!

Where to look for “direct” evidence of dark matter

In order to conclusively prove the existence of dark matter, astronomers and physicists alike want so-called “direct evidence”: a unique measurement of a new particle, using some property other than gravity. Astrobites has previously covered many examples of ways to directly detect dark matter (assuming the particles are something like a WIMP). The most direct detection method is with particle detectors buried underground. Another direct method is to detect gamma-rays or radio waves emitted when two dark matter particles collide (Fig. 2).

Fig. 2 – Dark matter particles probably don’t produce photons on their own. However, if they interact via the weak nuclear force, they could create a particle-antiparticle pair when they collide. These byproducts can produce gamma-rays, neutrinos, etc., resulting in observable traces of dark matter. From Baltz et al. 2008.

Because dark matter particles don’t interact with light, they shouldn’t produce photons on their own when they collide. However, some models predict they could create a particle-antiparticle pair, like a tauon and an antitauon. These byproducts of the dark matter collision can then collide again, resulting in gamma-rays. They could also spiral around galactic magnetic fields, producing radio waves. This method is less direct than actually seeing a collision in a particle detector. But, it would be much more conclusive evidence for dark matter than the gravitational effects alone.

The best place to look for the emission from colliding dark matter particles is where dark matter is thought to be the most dense: the centers of galaxies and clusters. Observers claim to be very close to detecting gamma-rays coming from dark matter at the center of the Milky Way (see this Astrobite). But while large galaxies like the Milky Way should have a lot of dark matter, they also have a lot of background gamma-ray sources! Pulsars, supernova remnants, and our supermassive black hole all emit gamma rays like crazy, and are all clustered right near the center of the Milky Way. So even a strong dark matter signal could be tough to distinguish from the regular gamma-ray emission from the galaxy.

A better place to look for dark matter signatures is in dwarf galaxies. They may have less dark matter overall, but also have the benefits that:

  • Dwarf galaxies are very common (there are more than 50 around the Milky Way and Andromeda)
  • Many are located at the galactic poles, away from the gamma ray emission of the Milky Way
  • They have many fewer internal gamma-ray sources, like pulsars and supernova remnants

Therefore, dwarf galaxies are thought to offer one of the best ways of detecting dark matter, if it actually emits gamma-rays at all.

A detection of gamma-rays from a dwarf galaxy

Previous attempts to observe gamma-rays coming from dwarf galaxies have been marginally unsuccessful (see this Astrobite). The authors of today’s paper analyzed archival measurements from the Fermi-LAT gamma-ray space telescope, looking for emission from the dwarf galaxy Reticulum 2. They estimate the expected background in two ways: one from the Fermi group’s model of diffuse emission in that region, and one from actual measurements, averaged in the regions around Reticulum 2.

Fig. 3 – The gamma-ray spectrum around Reticulum 2 is shown in red points. Two different background estimates are shown as the solid black line and grey triangles. The excess of emission around 2-10 GeV could be the first signs of direct evidence for dark matter. Fig. 1 from Geringer-Sameth et. al 2015.

In an exciting breakthrough, the authors find more gamma-ray flux than expected from either background model! This emission is observed to come from gamma-rays between 2–10 GeV (Fig. 3). If this emission comes from dark matter collisions — rather than some other source that hasn’t been identified yet — this energy range could help constrain the mass of the dark matter particles themselves.

The authors claim this to be a significant detection of dark matter. Values as high as 4σ are tossed around, suggesting less than a one-in-ten-thousand chance that this is just noise. This exciting result has been getting lots of press attention over the past week (such as in this Brown University press release, this Discovery article, and even the New York Times). However, these significance levels are dependent on the models chosen for both background emission and for the dark matter particle physics.
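
As a rough sanity check on those odds (a minimal sketch, assuming the quoted significance maps onto a Gaussian tail probability; this is not a calculation from the paper):

    # Translate a Gaussian "sigma" significance into a chance probability.
    from scipy.stats import norm

    sigma = 4.0
    p = norm.sf(sigma)  # one-sided tail probability of a 4-sigma fluctuation
    print(f"{sigma} sigma -> p ~ {p:.1e} (about 1 in {1 / p:,.0f})")
    # ~3.2e-05, comfortably below the one-in-ten-thousand figure quoted above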

In this Astrobiter's opinion, the most that can be confirmed is the detection of an excess of gamma-rays from Reticulum 2. The authors themselves stress that there is a lot more work ahead to confirm this emission indeed comes from dark matter. More gamma-ray observations of this galaxy are needed to improve the quality of the signal. The Fermi group is soon releasing a new data reduction algorithm, which will improve the resolution of the observations already completed. Perhaps even more important will be follow-up in other bands, such as optical or X-rays. These can identify whether another source that could produce gamma-rays (like a pulsar or a young, massive star) lies near this galaxy. Dark matter has not yet been conclusively detected, but these observations indicate we might be very close. It is unquestionable that this object (and other dwarf galaxies like it) will be getting a lot of attention in the years to come.


LPSC 2015: Philae at comet Churyumov-Gerasimenko by The Planetary Society

In my first post from the 2015 Lunar and Planetary Science Conference, I discuss the latest work on Philae images, and some cometary polymers.


How Do We Know When We Have Collected a Sample of Bennu? by The Planetary Society

A huge amount of effort goes into deciding where to try to collect a sample on Bennu. There are roughly nine months to survey, map and model the asteroid to help make this decision.


March 18, 2015

My Favourite film of 2004. by Feeling Listless



Film Irrespective of Richard Linklater's achievements, Before Sunset is a reminder of how films have forced a dedication in me which I should probably usefully apply to other things. Glancing through the review posted when the film was released, I'm reminded that because the film didn't turn up straight away at FACT in Liverpool, I travelled out to the Cornerhouse in Manchester after work, nearly an hour by train there and back again (this was when I was at Liverpool Direct storing up the money to go to university, or not depending on whether my application was accepted) (oddly enough moving to Paris was plan-B and at this point I'm not sure that wouldn't have been the better option) (but I digress).  This wasn't an unusual trip.  Years before I'd done the same with a friend for a screening of Singing in the Rain (and Hamlet even earlier) and would later kill myself to get to a screening of the first episode of Torchwood (not literally, clearly, though given what happened with the rest of that series it might have been preferable too).

With Netflix and rentals-by-post and the price of tickets, would I do this again now?  Probably.  I saw the sequel, Before Midnight, in the same screen at the same cinema in the same seat nine years later at the end of a day spent shopping in Manchester, though it's fair to say I probably went to the city on that week, on that day, because Before Midnight was playing.  That would also describe my approach to Cedric Klapisch's Chinese Puzzle.  What Netflix lacks and by-post only just manages to retain are the sense of anticipation and the ritualistic aspect of going to the cinema, of buying a ticket, hoping your favourite seat is free and, if you're me, going to the toilet three times before the film starts because I've drunk too much coffee again.  Oh, and reading the cinema's brochure advertising upcoming attractions, which in the case of the Cornerhouse is often films which I know I won't be watching for another six months while I wait for them to turn up on dvd or streaming (creating a different kind of anticipation).

But much of the trip has to do with the Cornerhouse itself.  As you'll read in the coming weeks, the Hyde Park Picture House in Leeds was for a time my cavern of dreams.  After that the 051 in Liverpool.  Then FACT.  But the Cornerhouse is the Cornerhouse.  It's my favourite cinema.  For me, it is cinema.  Partly it's because living in a different city I can't take it for granted so it'll always be special.  Partly it's because in earlier years with its cushioned fabricy blue seats, basement screens of various sizes and box office across the street from its largest venue it didn't seem like any cinema I'd ever visited.  The smell.  Plus back in the day before the internet, those brochures always seemed filled with films which I'd never heard of and would never see.  Even now, the Cornerhouse is the place I go when I want to see films which are important or feel like they're going to be important to me.  The idea of getting on a train just to see a film at the Cornerhouse has never seemed ludicrous because the experience of seeing a film there will be unlike seeing a film anywhere else.

Soon it'll be gone of course, replaced by a much larger, and to be fair, probably better designed venue up the road.  Presumptuously called "Home" it'll be interesting to see if it retains the mystique.  There'll be more screens, so a greater selection of films (one of the joys of the Cornerhouse was the limited selection at odd screening times which meant that sometimes I'd stumble upon a film I wasn't expecting because it's all that happened to be on when I happened to be in Manchester).  Plus a theatre.  Plus a cafe.  Plus the gallery space.  It sounds like it could still be like no other cinema, bar FACT or the BFI, I've ever visited.  But it'll also be nothing like the Cornerhouse because initially it'll be without the memories which can only be gained by visiting a building repeatedly.  That place may well be called "Home" but the Cornerhouse at the moment has a better claim to the description.  The last film I saw there, quite deliberately, was Chinese Puzzle but if it had been another installment in the life of Celine and Jesse that would have been just right too.


Pontlottyn by Goatchurch

Here I am at a climbing wall in Bristol, having balked at the price and instead sat in the cafe. They ought to have a discount rate for people who don't actually like climbing. Here's what I enjoy instead. (I am so predictable):

pontwales

Brrr, it was cold in grimmest South Wales on a grim Saturday when all the colours are grey and the light makes everything flat. I enjoyed a sublime hour long flight above Pontlottyn, even though I didn’t go anywhere except soar up and down the ridge alone and then top land in a howling gale.

My box of tricks is in the 3D printed purple box on my left next to the airspeed indicator. I’ve not had the chance to do anything with the data except plot it and go: “that’s pretty noisy” at the accelerometer data. I have plans to extract consistent correlations, barometer vs altitude vs temperature, bar position vs wind speed, roll angle vs turning rate registered on the compass, and whether I keep diving out of turns because I don’t have enough speed and control.
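
Something like the following would be a starting point for that correlation-hunting (a minimal sketch; the file name and column names are invented, and it assumes the logger output has already been parsed into a CSV that pandas can read):

    # Look for the consistent pairwise correlations mentioned above:
    # barometer vs altitude vs temperature, bar position vs wind speed, etc.
    import pandas as pd

    df = pd.read_csv("flight_log.csv")  # hypothetical file name
    cols = ["baro_pressure", "gps_altitude", "temperature",
            "bar_position", "airspeed", "roll_angle", "turn_rate"]
    print(df[cols].corr().round(2))     # Pearson correlation matrix

    # A rolling window stops slow drifts (the afternoon cooling off, say)
    # from swamping the short-timescale relationships.
    rolling = df["baro_pressure"].rolling(window=200).corr(df["gps_altitude"])
    print(rolling.describe())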

Unlike my so far doomed attempts at manipulating house and fridge temperature data, this flight dynamical system is memory-free. The same temperature, pressure, windspeed, and wing angle at any time of the flight should result in exactly the same response. Deformations of the wing are brief and temporary. This is not the case for the fridge where every cycle begins with a different temperature distribution within the dense cabbagy foodstuffs and chemical pumping machinery.

Subject to instrumental noise and turbulence, all the data should be with me, and I can only hope this isn’t going to end up as just another one of my expensive failed software projects that looked plausible when I began, but then crash landed in the trees.


The Misunderstood Self Driving Car by Albert Wenger

Yesterday I saw a funny cartoon about self driving cars by Scott Adams from the perspective of a robot with an attitude:

When I discuss the impact of automation on the labor market I sometimes run into the argument that self driving cars are still years away and that there are a lot of edge cases to solve before they will replace human drivers.

That argument makes it appear as if there was some distinct moment in the future where we finally say “hurrah, the self driving car is completely finished” and that there will be no impact until then. But that is not how technology progresses.

Instead, technology tends to arrive piecemeal in fits and starts. And it has an impact all along the way already. The same is true for self driving cars. We already have some key components that have entered the market: the most impactful one has been GPS based navigation (which has now come to pretty much every car courtesy of smartphones). While this doesn’t automate the driving per se it does take over a crucial bit which is figuring out how to get from A to B. And that means that cheaper labor can now be substituted for trained labor in driving taxis and limousines!

What will be the next big step? Well, one possibility is remote driving. Suppose you have a self driving car but it can't handle certain edge cases, like getting stumped on a New York City street that's blocked by an oil delivery truck (we've all been there). Instead of having to solve a situation like that completely autonomously, one can envision a future of driver pools that stand by whenever a self driving car needs an assist. Here technology provides leverage — a single driver might now be able to "supervise" multiple cars. And of course that driver could be in a low cost part of the country.

This has of course already happened, although not yet with cars — the US military has many new drone pilots who have never logged an hour of flight in a real plane. And drones can keep themselves up in the air and take off and land by themselves. So even the ratio of pilot hours to flight hours has already changed. And while I don't know anything about military pay, in commercial aviation the impact of automation has helped drive down airline pilot pay at the 75th percentile from $150K in 1999 to $100K in 2011.

So let’s please not pretend that we shouldn’t think about labor market consequences of self driving cars because fully autonomous cars en masse are still years away. The impact of what is becoming possible will be felt all along the way.


The Magnetic Personalities Of Dead Stars by Astrobites

Title: The incidence of magnetic fields in cool DZ white dwarfs

Authors: Mark Hollands, Boris Gaensicke, Detlev Koester

First Author’s Institution: Department of Physics, University of Warwick, Coventry, CV4 7AL, UK

The galaxy is littered with white dwarfs, the burnt out remnants of stars that have run out of hydrogen fuel in their cores, but were too small to explode as supernovae. But far from being lifeless orbs, around a tenth of white dwarfs have powerful magnetic fields, a million times stronger than that of the Sun. How did these magnetic white dwarfs become such strong magnets? And just how many are there? Hollands et al. set out to answer the second of these questions, in the hope that it will shed light on the first.

By far the easiest way to find magnetic fields in white dwarfs is to look at the absorption lines in their spectra. Normally, the light absorbed by an element at a certain wavelength is seen as a single, thin line. However, if the white dwarf is magnetic, then a process called the Zeeman Effect splits the line into three. The further the outer two lines are from the central, original absorption line, the stronger the field. This offers a simple way to both find magnetic white dwarfs and to measure their field strengths.
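
For reference (a textbook relation, not taken from the paper), the wavelength separation of the outer two Zeeman components grows linearly with the field strength: for a line of rest wavelength λ₀ and effective Landé factor g_eff,

    \Delta\lambda \approx 4.67\times10^{-13}\, g_{\mathrm{eff}}\,
    \lambda_{0}^{2}[\mathrm{\AA}]\; B[\mathrm{G}]\ \mathrm{\AA}

so measuring the splitting of a known line gives the field strength B directly.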


Figure 4 of the paper, showing absorption lines of magnesium and sodium in one of the newly discovered magnetic white dwarfs. The three lines reveal the presence of a magnetic field—if there were no magnetic field, only the central line would be present. The red line shows the model used to calculate the strength of the field.

But as white dwarfs get older, they cool down, and the helium absorption lines in their spectra disappear as the atoms fall to their ground state. This hides the magnetic field, preventing us from knowing the magnetic behaviour of white dwarfs over their entire age range. Fortunately, a large fraction of these white dwarfs also have planetary systems. Asteroids and other planetesimals often fall onto the white dwarfs, producing the absorption lines needed to reveal their ancient magnetic fields. White dwarfs like that are known as DZs, Degenerate objects that have elements with a high atomic number (Z) in their atmospheres.

The authors used the huge Sloan Digital Sky Survey (SDSS), filtering down the thousands of objects observed to find 79 DZs. Of these, 10 were found to be magnetic, 7 of which no one had spotted before. Their magnetic fields were huge: up to 9.59 MegaGauss (nearly 1000 Tesla). For comparison, the large magnets used in MRI machines generate around 3 Tesla.  This implies that roughly 13 percent of DZ white dwarfs are magnetic. However, Hollands et al. point out that there are many biases involved, such as the difficulty of seeing dim objects or of spotting weak magnetic fields in spectra with low signal to noise, as well as the uncertainty over the geometry of the magnetic fields. Accounting for all of these biases could lead to an even higher occurrence rate.
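
A quick check on those numbers (1 gauss = 10^-4 tesla; just arithmetic, not from the paper):

    # Convert the strongest reported field to SI and compare with an MRI magnet.
    GAUSS_TO_TESLA = 1e-4

    b_gauss = 9.59e6                # 9.59 MegaGauss
    b_tesla = b_gauss * GAUSS_TO_TESLA
    print(b_tesla)                  # ~959 T, i.e. "nearly 1000 Tesla"
    print(b_tesla / 3.0)            # roughly 320 times a typical 3 T MRI magnet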

The authors then compared this rate with that found by others looking at different types of white dwarf, or over different temperature ranges. What they found was surprising: Magnetic fields appeared to be much less common in other types of white dwarf, and much weaker.

With this in mind, Hollands et al. then tried to answer the question of where the magnetic fields came from. The simplest explanation is that they were there from the start, left over from when the white dwarfs were stars. The progenitors of the magnetic white dwarfs could have been the curious Ap/Bp stars, which have much stronger magnetic fields than most stars, at around a few kiloGauss. As they shrank down into white dwarfs, conservation of magnetic flux would have ramped up the field to the required MegaGauss levels. Unfortunately there aren't enough Ap/Bp stars to account for the high occurrence rate of magnetic DZs. Nor does this explain why many more magnetic DZs were found compared to other types of white dwarf.

The authors' next suggestion is that the magnetic fields are the result of a binary merger, in which the white dwarfs combined with companion stars into single objects. During this process a magnetic dynamo would naturally be created. This would also explain why these white dwarfs seem to have a higher average mass than other white dwarfs. The problem with this hypothesis is that the authors only know that the white dwarfs are magnetic thanks to the white dwarfs' planetary systems, which probably wouldn't survive such a merger.  However, the authors say that it is possible that it could be a planet doing the merging, rather than a companion star. But this explanation again falls victim to the high incidence rate: models of planetary systems around white dwarfs suggest that such mergers can't happen often enough to make all of the magnetic white dwarfs.

Thanks to these flaws in all of the potential explanations, the authors are forced to leave the question of how the magnetic white dwarfs formed unanswered. They finish by pointing out that the next set of results from the SDSS is on its way, promising to provide many more of the mysterious magnetic white dwarfs to investigate.

 

 


March 17, 2015

Talks: Dr Hannah Fry. by Feeling Listless

People  This week BBC Four broadcast Climate Change by Numbers, in which three mathematicians - Dr Hannah Fry, Prof Norman Fenton and Prof David Spiegelhalter - each utilised a set of numbers to explain why climate change should be the most pressing issue on planet Earth that we should all be dealing with, and why we should believe the 97% of climate scientists who believe it to be due to human activity.  The whole thing is available to watch here for the next two weeks, assuming it isn't repeated, in which case it'll be a bit longer.

As her biography page at UCL explains, Dr. Fry, "is a lecturer in the mathematics of cities at the Centre for Advanced Spatial Analysis (CASA). She was trained as a mathematician with a first degree in mathematics and theoretical physics, followed by a PhD in fluid dynamics. After a brief period working in aerodynamics, she returned to UCL to take up a post-doctoral position researching a relatively new area of science - social and economic complex systems. This led to her appointment as a lecturer in the field in October 2012."

Here she is herself with a richer explanation of her interests from a German science conference's channel:



Which is fascinating. For the past few years she's appeared at a range of conferences and courses demonstrating this approach to data, and I've gathered as many of these talks as I can find. There's some repetition in places, but she's excellent at finding new ways of presenting similar data for different ends.

Let's begin with Fry's own channel where there are three pieces, two of which are provided for her students and (just slightly) above entry level:



Can maths predict a riot
"This video came about as a splinter project of some work I've been doing at UCL, trying to understand the 2011 London riots from a mathematical perspective. "




Linear relationships, power laws and exponentials




Logs and exps


The general UCLBASc channel has this piece explaining the course in greater detail:



Quantitative Methods



But perhaps she's best known publicly for these brilliant TED Talks (the first also available as an eBook and which, as Fry's media page demonstrates, gained some interest around Valentine's Day this year):



The mathematics of love
"Finding the right mate is no cakewalk — but is it even mathematically likely? In a charming talk, mathematician Hannah Fry shows patterns in how we look for love, and gives her top three tips (verified by math!) for finding that special someone."




Is life really that complex?
"Hannah Fry trained as a mathematician, and completed her PhD in fluid dynamics in early 2011. After a brief period working as an aerodynamicist in the motorsport industry, she came back to UCL to work on a major interdisciplinary project in complexity science. The project spans several departments, including Mathematics and the Centre for Advanced Spatial Analysis, and focuses on understanding global social systems -- such as Trade, Migration and Security. Hannah's research interests revolve around creating new mathematical techniques to study these systems, with recent work including studies of the London Riots and Consumer Behaviour."


In 2014 she was invited to speak at Ada Lovelace Day 2014:



Can Maths Predict the Future?
"Hannah Fry shows how maths can explain real world events. From crimes to relationships, patterns in numbers such as Benford's law on the prevalence of numbers starting with 1', help us predict the future."


She extemporised on riot prediction theory at re:publica 2014 across a whole hour:



I predict a riot!
Synopsis: "It happens again and again: peaceful protests turn violent, paving stones become missiles, the police respond with truncheons, tear gas and water cannons. People are hurt, sometimes fatally. But why were certain areas and neighbourhoods affected by the violence, while others remained completely peaceful? Why does social unrest start in the first place? How does it spread?"


She's also a regular on the Numberphile channel:






And the BBC's Britlab:





And HEADSteam:



She's also presented a Radio 4 documentary, Can Maths Combat Terrorism?

For more on Dr Hannah Fry, visit her website.


In terms of strategy, WHY is irrelevant without WHERE by Simon Wardley

Think of a map in combat or a chessboard. There are two things that both instruments tell you.

First is the position of things, or WHERE things are in relation to other things. This Knight is on this part of the board next to the King etc. The troops are on this hill and the enemy is in the pass below.

The second thing they tell you is WHERE things can move to. We can move these troops positioned on the hill, down the hill to the south but we can't move the same troops north because there is a cliff which falls into the sea etc. We can move this Knight here or that Rook there.

A Wardley Map enables you to draw a line of the present (an existing business or organisation) on a landscape of position (value chain) vs movement (evolution). See figure 1.

Figure 1 - A MAP


On the map we have a hypothetical business represented by points A to B to C in a chain of needs. We can see the relationships between components. We know all these components will evolve due to supply and demand competition. We know we can manipulate this evolution (open efforts, patents, FUD, constraints etc). We know that as components evolve they can enable higher order systems to appear.
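
Not from the post itself, but a minimal sketch of how a map like this could be held as data; the component names echo the hypothetical A-to-B-to-C chain above, while the 0-to-1 evolution scale and the 0.75 "commodity" threshold are assumptions of the sketch:

    # A map is position (visibility in the chain of needs) vs movement (evolution).
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        visibility: float  # 1.0 = visible user need, 0.0 = deeply buried
        evolution: float   # 0.0 = genesis ... 1.0 = commodity / utility

    components = {
        "A": Component("A", visibility=1.0, evolution=0.2),  # novel, user facing
        "B": Component("B", visibility=0.6, evolution=0.5),  # product stage
        "C": Component("C", visibility=0.2, evolution=0.9),  # near commodity
    }
    chain_of_needs = [("A", "B"), ("B", "C")]  # A needs B, B needs C

    # A crude "where might we attack?" pass: anything not yet a commodity is a
    # candidate for deliberate commoditisation, or a base on which to build
    # higher order systems.
    for c in components.values():
        if c.evolution < 0.75:
            print(f"{c.name}: evolution={c.evolution} -> possible point of attack")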

This is what we call Situational Awareness.

Hence, from your map (even a simple example like above) you can see multiple points WHERE you might attack. Do we want to commoditise a component? Do we want to drive up the value chain into creating a higher order system?

We're going to deep dive soon on some strategic gameplay and weak signal techniques but before we do, I need to make this bit very clear. The WHY of strategy is always a relative statement, as in WHY here over there? Why move the rook here rather than there? Why move the troops down the hill rather than providing suppressing fire to the pass? Without the WHEREs, WHY becomes ... hand wavy ... and usually deteriorates into how, what and when (i.e. action statements).

You can usually tell competitors who have little to no situational awareness because they talk about the need to focus on why, but when you ask them why they are attacking this space over another, they often get flummoxed, a bit hand wavy, and quickly move into inspirational, vision and action statements -

"We're going to make the lives of cats better everywhere. It's part of our vision to become the best supplier of cat food. That's why we're building the internet of things for kittens!"

This is always useful to know. These people are potential fodder for your cannons and easy prey. They are your soft targets, the ones to take out first. With practice, you can often use inertia to get them to self-implode and fight themselves.

BUT be warned. When asked the same question you need to reply in the same vague, hand wavy way. Don't give away information. Misinform. So do a bit of digging on your competitors before you attack. Just in case they know what they're doing. Chances are, you'll be fine.

Which is the other point of mapping. Don't just map yourself, map your competitors. If a competitor doesn't understand the game, don't hesitate to help yourself to their market. It's always easier to build by taking out the easy prey before you tackle the hard targets.


Port Hyperlink. by Feeling Listless

Film When I wrote my MA Film Studies dissertation about "hyperlink" films, as you might expect I had to watch a lot of them, and although the process was mostly about analysing them looking for commonalities, pretty soon I did realise that some of them were better than others. Which is why I can just about agree with Nicholas Barber in today's G2 about how some films which would have worked perfectly well as portmanteau films found themselves mixed and matched in unhelpful ways:

"... prefer a good honest portmanteau. It’s less tricksy than a hyperlink film. Every section has to stand or fall on its own merits, as well as complement the whole. If one segment doesn’t entertain, it risks being cut out: the other segments will survive without it. Sub-Altman hyperlink films, on the other hand, can use their constant back-and-forthing to disguise the weakness of the individual strands. You don’t get such shilly-shallying from Dr Terror’s House of Horrors, where every gruesome story packs a punch and has a twist to remember. Great title, too."
But as I researched my dissertation, one of the questions I had to deal with, in relation to considering whether this was a genre and what its tropes might be, was what the "pleasures" might be for the audience, the repeated element that people look for. In hyperlink films this is the moment when you realise how the people are connected, that two characters you've been following are actually (sorry) siblings or married or co-workers or whatever, something which often doesn't happen until deep into the film, causing you to re-evaluate what you've seen before.  Todd Solondz is a master of this - Happiness being the primary example.  Oddly, he doesn't mention Paris, je t'aime, which brilliantly is a portmanteau film, until it isn't.


VCs Are (What Google Thinks) by Albert Wenger

Last night at dinner while waiting for the food we played the “Google autocomplete game.” Here is how it goes. You go to Google and type the beginning of a search query. Then you tell the others the possible completions and they have to guess which is the most popular (you can even try to guess the whole order — this is loosely based on Family Feud and if you want a more elaborate version check out Google Feud).

So when it was Susan’s turn she typed “VCs are” and got this, which is pretty funny but also a bit revealing

image

I am not exactly sure if this list is entirely driven by the most frequent queries or takes other signals into account, but in any case it does reveal something about the perceived image of venture investors (see my posts about information cascades on why this should be taken somewhat with a grain of salt).

The first one is interesting because it is right, but not necessarily in the way you might expect. Most likely what people are looking for are examples of VCs doing mean things to founders. There is another interpretation though. I have encountered a number of (especially younger) entrepreneurs who have selected their investors based on whom they got along with the most easily. But that's a shallow definition of friend. Rather, a friend should be someone who will tell you when you are screwing up. With that definition of "friend" I agree that a lot of VCs are not friends, because they wait too long to tell entrepreneurs that they are screwing up, and by then it is usually too late.

The second one could be fed by many things, but I suspect that it primarily arises from situations in which the interests of the VC and those of entrepreneurs diverge. There are many such situations over the lifetime of a company, starting with the reasons given to entrepreneurs for passing on an investment and, later, negotiating the price at which an investment takes place. My own approach to these situations and others (e.g., the sale of a company) has been to make the conflict completely transparent and discuss it openly and thoroughly. This takes more time and may lead to somewhat different outcomes. But I find it sure beats hiding behind claims that something is "standard" or possibly even claiming it is good for the entrepreneur when it is not.

The third one frankly came as a surprise to me. Personally I could be considered a failed academic (I completed my PhD but didn't go into academia), but many VCs I know are former operators, often with great success. So this morning I figured I would look at what this query actually returns, and the top result is an article from Australia titled "VCs are 'failed academics'" that as far as I can tell has nothing to do with VCs. So here then we are left with a bit of a puzzle, and potentially a sign of an information cascade of the type that landed Google in a defamation lawsuit brought by the German first lady.

The fourth one is, well, dumb, and I am sure applies to many other professions (I didn’t actually test this). For good measure then, here is what Google thinks about entrepreneurs


LightSail Featured on CBS Evening News by The Planetary Society

The Planetary Society's LightSail spacecraft made an appearance on national television Monday night during a two-minute segment by CBS Evening News.


At LPSC this year? Come to this special session on NASA's budget by The Planetary Society

For those of you who are here at LPSC 2015, we’ve organized a special session at noon on Tuesday, March 17th in the Montgomery Ballroom to bring together representatives from the three major professional organizations that represent planetary scientists to address your questions and concerns about NASA's 2016 budget request.


March 16, 2015

Continuous and Sustainable Competitive Advantage comes from Managing the Middle not the Ends by Simon Wardley

The mechanics of biological and economic systems are near identical because they're driven by the same force - competition. The terms we use to describe economic systems - evolution, punctuated equilibria, ecosystems, the red queen, cell based structures, co-evolution, componentisation, the adaptive renewal cycle and so forth - all have their origins in biology.

I'm going to use a few of these to argue why the future of continuous and sustainable competitive advantage comes from the middle not the ends of evolution.

First, we need to understand evolution. Evolution is not the same as diffusion. It describes how something evolves as opposed to how a particular instance of something diffuses (see Everett Rogers) in an ecosystem. 

Evolution is not predictable over time, but it does occur over time. The best model I know of for describing evolution is given in figure 1. It's based upon a pattern that was described in 2004 and on primary research, conducted between 2006 and 2007, to determine why that pattern occurred.

Figure 1 - Specific form of Evolution


It's the combination of supply and demand competition that causes things to evolve along a standard pathway. As things evolve their characteristics change. However, evolution doesn't just impact activities (the things we do). It also impacts practice (how we do things), data (how we record things) and even the mental models that we create (how we understand things).

The general form of evolution is given in figure 2 and in table 1, the characteristics of the different classes along with several general economic patterns are provided.

Figure 2 - General form of Evolution



Table 1 - Characteristics of General Form.


There are all sorts of subtle interactions (e.g. co-evolution of practice with activities, how to define ubiquity) but that's not the purpose of this post. What I want to concentrate on is the issue of evolutionary flow i.e. the competition that causes all things to evolve.

As a thing evolves, it enables higher order systems to appear through an effect known as componentisation (see Herbert Simon). For example, utility electricity provision enabled television, radio, electric lighting and digital computing. These new things exist in the unexplored territory, the uncharted space. They are about experimentation and exploration. They are only economically feasible because the underlying subsystems (i.e. the things they need, such as power) became more industrialised.

When this happens, visible user needs are no longer about the underlying subsystem but instead about the new higher order systems that have appeared. People only want electricity in order to power their computer, their TV, their radio and so forth. 

These higher order things are the visible user need, the underlying subsystem becomes increasingly buried, obscured and invisible. But any new higher order thing also evolves e.g. digital computing has evolved from its genesis to utility forms today. This in turn allows an even higher order of systems to be built.

When I use a cloud service, I don't care about the underlying components that the supplier might need (e.g. computing infrastructure). I care even less about the components below that (e.g. power).  I did care many years ago about things like servers and even power but today I care about what I can build with cloud services. Those underlying components are far removed from my visible user need today. Why? Because I'm competing against others who also use these services.

It is supply and demand competition that creates the evolutionary pressure which drives the novel to become the commodity, the uncharted to become the industrialised and the cycle to repeat. For activities that means genesis begets evolution begets genesis (see figure 3). For knowledge it means concept begets universally accepted theory begets concept.

Figure 3 - Genesis begets evolution begets Genesis


As an aside, this cycle has specific effects. It has three stages of peace, war and wonder due to its interaction with inertia to change. There are numerous forms of inertia and they act as what Allan Kelly described as a homomorphic force.  This cycle has a corollary in Hollings adaptive renewal cycle which is unsurprising given both are driven by competition. The war element of the cycle creates a punctuated equilibrium with the past due to network effects of competition i.e. the more people adapt the more pressure to adapt increases. The latter is known as the Red Queen Effect.

Of course, whenever you examine something it has a past, a present and a future. If I use a cloud based analytic service, I know it needs an underlying computing infrastructure, I know that computing infrastructure needs power and I know that the generators that make power need mechanical components. I also know that all these components evolved and were once novel and new. They each had their genesis. 

I've shown this constant evolution in figure 4. What's important to remember is the chain of needs (shown in a solid black line) is a line of the present. Those components evolved from the past and are evolving into the future (the dotted lines). The further you go down the chain, the more invisible the component becomes to the end user.

Figure 4 - Chain of Needs, The line of the present.


Being able to show the present on a landscape of what something is (the value chain) against how it is evolving (change i.e. past, present and future) is what mapping is all about. It's an essential activity for improving situational awareness, gameplay, operations and organisational learning.

It's key to understand that a map is a picture of the present in a dynamic landscape. All the components are evolving due to competition. That evolutionary flow affects everything. But if you know this, if you understand how evolution works, the change of characteristics, the requirement for different methodologies and common economic patterns, then you can exploit this. Figure 5 has an example map and a particular play known as Fool's mate.

Figure 5 - Map


With a map you can mitigate risks, obliterate alignment issues, improve communication and even examine flows of business revenue, along with a host of other techniques. With enough maps you can remove duplication and bias in an organisation.

You can also use maps to organise teams into cell structures and to solve the issues of aptitude and attitude i.e. we might have lots of engineering (an aptitude) but the type (an attitude) of engineering we need to create novel activities is different from the type of engineering we need to run an industrial service. This is a structure known as Pioneers, Settlers and Town Planners - a derivative of commandos, infantry and police from Robert X. Cringely's Accidental Empires, 1993.

You can use maps for scenario planning, for determining points of attack (the WHERE of strategy; WHY is always a relative statement, such as why here over there) and you can even create a profile of a company, an industry or a market i.e. a frequency count of where components are along the evolutionary scale.

Why bother with a profile? 

The components in the profile are all evolving from left to right (i.e. competition drives them to more industrialised). If nothing novel was ever created then it would all (assuming effective competition) become industrialised. You can use a profile to determine the balance of your organisation and whether you need more pioneers, more settlers or more town planners (see figure 6).

Figure 6 - Profile
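
As an illustration only (the stage labels follow the general form of evolution described above, while the component-to-stage assignments and the stage-to-role mapping are assumptions of this sketch, not figures from the post), a profile is just a frequency count of components per evolutionary stage, which in turn hints at the pioneer, settler and town planner balance needed:

    # Build a profile: count map components per evolution stage, then read off
    # roughly which group (pioneers / settlers / town planners) carries the load.
    from collections import Counter

    STAGE_TO_ROLE = {
        "genesis": "pioneers",
        "custom built": "settlers",
        "product": "settlers",
        "commodity": "town planners",
    }

    # Hypothetical component -> stage assignments read off a map
    component_stage = {
        "new analytics": "genesis",
        "content pipeline": "custom built",
        "CRM": "product",
        "web platform": "product",
        "compute": "commodity",
        "power": "commodity",
        "payments": "commodity",
    }

    profile = Counter(component_stage.values())
    print(dict(profile))       # the profile itself

    role_load = Counter(STAGE_TO_ROLE[s] for s in component_stage.values())
    print(dict(role_load))     # rough balance of the three groups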


To give you an idea of how to apply the three party structure: I've covered this before (many, many times) but this diagram will help.

Figure 7 - Applying PST (Pioneer, Settler and Town Planner) to a Map



As with the need for using different methods in purchasing or project management (agile for genesis, lean for transition, six sigma for the industrialised), the roles of pioneers, settlers and town planners are different. So is their culture. So is the way they organise. So are the types of people. See figure 8.

Figure 8 - Characteristics of Pioneer, Settler and Town Planner




Ok, but what has this to do with the middle?

The settlers and the transitional part of the profile, that is your middle.

This form of structure is the only way I know of sustainably solving the Salaman and Storey innovation paradox of 2002. This isn't "nice ideas", this came from competitive practice over a decade ago and subsequent re-use. Why it worked so well could only be explained afterwards in 2007 when the evolution curve went from hand waving pattern to something more concrete.

Of course, the techniques of mapping and evolution have themselves evolved over the last decade. Mostly this has been by refining the terms used to make the language more transparent to others, or through wider adoption. However, every now and then something new appears, and the latest addition is from a post by Dan Hushon on 'Digital Disruptions and Continous Transformation'. If you're familiar with mapping, I recommend you go and read it now!

There's an awful lot to like in Dan's post. However, to explain why it's important, you need to first consider that a map can also be used to provide information on business revenue flows i.e. you can investigate a map to identify different business opportunities (see figure 9)

Figure 9 - Business Revenue Flow.


Some of those revenue streams will be profitable, others not. But even with a profitable revenue stream then this won't remain static. Take for example the pipeline of content in figure 5 above from commissioned shows to acquired formats i.e. the evolution of a new show like X-factor to a broadly repeated format show. The nature of the show has evolved.

With anything new (i.e. genesis) you'd expect to add a high margin because of the risk taken. As that thing evolves and becomes more widespread and defined, you'd expect the margin per unit to be reduced and compensated for by much greater volume in a functioning marketplace.

In his post, Dan looks at this from the point of view of value vs evolution and overlays a PST (pioneer - settler - town planner) structure. I've slightly modified this but kept very close to the original in figure 10. This is not a graph of how things are, it's a concept that hasn't been tested yet. But evolution would suggest that this is how things should operate.

Figure 10 - Modified version of Dan Hushon's Graph


Why do I like this graph so much? Well, think of your organisation. You have core transactions you provide to others (in order to meet their needs). Those core transactions consist of underlying components (which can be mapped) that are evolving. These components affect the profitability of the transaction. The transaction itself is also evolving. As it evolves, the margin (or value) created by each unit of transaction should also change and reduce. Novel transactions should show high value per unit (and high margins). Commodity transactions should show low value per unit (and low margins) over time.

You can use this graph to create a financial profile of your organisation and where your transactions are. They should all be evolving along the arrow. Of course, as transactions evolve to more of a commodity, you should be looking for higher order systems to replace them. This can be used to create a financial pipeline for change and, whilst the ends are important, it's the middle we really need to manage: the flow from one to another.
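
As a rough sketch of how such a financial profile might be put together (the transactions, evolution scores and the margin / volume assumptions below are illustrative guesses of mine, not figures from Dan Hushon's post):

def expected_shape(evolution):
    """Return (relative margin per unit, relative volume) for a transaction.
    Assumption: margin falls linearly and volume grows super-linearly as the
    transaction evolves from genesis (0.0) to commodity (1.0)."""
    return 1.0 - evolution, evolution ** 2

transactions = {                       # hypothetical transactions with evolution scores
    "bespoke analytics report": 0.2,
    "subscription product": 0.6,
    "hosting service": 0.9,
}

for name, evo in sorted(transactions.items(), key=lambda kv: kv[1]):
    margin, volume = expected_shape(evo)
    note = "look for higher order systems to replace it" if evo > 0.8 else "manage the flow"
    print(f"{name:26s} evolution={evo:.1f} margin~{margin:.2f} volume~{volume:.2f} -> {note}")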

Ok, I get this but why is the middle so important?

If it wasn't clear, situational awareness is key to a vast number of aspects of managing an organisation, and the ONLY way I've found to significantly improve situational awareness is to map and compare the present against evolution.

Evolution shows us that we're living in a dynamic environment with the constant flow of change from the novel to the industrialised. This impacts everything from project management to purchasing to team structure to finance.

Whilst there are two extremes (genesis vs commodity) which require polar opposites in methods, culture and structure (Salaman and Storey, Innovation paradox, 2002) and everything (activities, practice, data and knowledge) in a competitive market (with demand and supply competition) evolves from one extreme to another ... there is a constant. That constant is change. The middle represents and governs this evolutionary flow from one extreme to another.

Yes, it's important to manage the extremes but both extremes can be outsourced, either to suppliers or through the use of an ecosystem model such as ILC. Some firms will be able to hide in the relative safety of the industrialised space, using ecosystem models to sense the future. Some will try and survive in the high risk space of constant genesis. For most of us, the one thing you need to manage above anything else is the flow between the extremes. This is where all the tactical plays matter. This is where situational awareness is critical. For most of us, this is where continuous and sustainable advantage can be gained.

If you're going to build a two mode structure (bimodal, two speed IT, dual operating system etc) then focus on creating public APIs for utility services and get every other company to build on top of them. Let everyone else be your R&D labs and your pioneers. Hide in the industrialised spaces and use ecosystems as your future sensing engines. Remove pioneering internally through the use of a press release process. Use the skills of the settler to mine the ecosystem for new successful changes, which you then industrialise with town planners into commodity components or utility services. Alas, there's only a limited number of firms who can play that game.

This is why I'm not a fan of bimodal, dual operating system and two speed IT concepts in general. Most of us have to cope with the entire spectrum of evolution. These two mode methods appear to focus the mind too much on the extremes. It doesn't matter whether they've included the all-important middle (the transitional part, the settlers) in one group or the other; you've just buried that which truly matters.

The Settlers (like the infantry) are key. They make success happen. They control your destiny. They make the entire process sustainable. They play the most important games. They are where open source becomes a weapon. They are mostly ignored in favour of the extremes. Let us "take the pig and lipstick it" seems to be the mantra. Bolt on a digital side to our legacy and ignore that the digital will itself become legacy. What do we call the legacy then? Legacy Legacy with Legacy Digital and ... let us add some more lipstick?

Organising by extremes creates two opposing camps; conflict and bottlenecks are inevitable, and something I've seen over the last decade. Yes, these two mode methods may give you a short term boost in terms of efficiency and innovation, but it's the sustainability issue that's the problem. That's why I strongly suspect that in a decade's time, the same people telling you to become a two mode type organisation will be flogging you a three mode solution to your new two mode problems. Save yourself the trouble and another round of re-organisation.

On the upside, all the mapping stuff (2005 onwards), all the profile work, flow, the PST structure and ILC models (2007 onwards) are creative commons share alike. It's all scattered throughout this blog and has been presented at hundreds of conferences between 2007 and 2014.

That's an invitation to help yourself.

Improve your situational awareness. Learn to play the game.

You'll soon discover why the "middle" is the all important battleground.

Oh, and on the question of bias ... am I biased towards mapping? Yes! I've been using mapping to outplay other companies in many commercial markets for over a decade and, more recently in the last four years, within Governments. I'm as biased as you can get. In my world, situational awareness helps. Yes, it's also true that maps are complex. In some cases, you can spend a whole day creating a first map that you can use.


--- 18th March 2015

Added figure 7 just so people can make the jump between PST and a map.


Potterhood. by Feeling Listless

Film Kristin Thompson's written a rather brilliant (and detailed!) comparative study of Boyhood and the Harry Potter film series:

"Of all the series mentioned above, “Harry Potter” is most pertinent to Boyhood. Their production periods overlapped considerably, and their directors faced similar major challenges. This despite the disparity in their finances. “Harry Potter” had a huge budget supplied by Warner Bros., ranging from the lowest at $100 million for Chamber of Secrets to the highest at $250 million for Half-Blood Prince. (These are the budgets as publicly acknowledged, taken from Box-Office Mojo, which has no figures for the last two films.) Boyhood had a lean budget of $200 thousand for each of the twelve years and totaling about $4 million with postproduction and other expenses added in. Still, as we shall see, the challenges really had nothing to do with the budgets."
Previously. I love the nugget that the IFC Center in New York ran all eight Potter films in a row as a homage to Boyhood. Kristin makes a convincing case for the two being somehow inextricably linked, and the appearance of the Potter books in Boyhood suggests it is something Linklater was clearly conscious of.


Is the Opportunity Rover a Mission 'Whose Time Has Passed'? by The Planetary Society

The NASA Administrator declared that the Opportunity rover is a mission 'whose time has passed' and will be defunded next year. Will Congress act to save it?


March 15, 2015

Two Speed Business? Feels more like inertia. by Simon Wardley

These days I use maps of organisations to apply a cell based structure, ideally using a Pioneer, Settler, Town Planner model. The map is based upon a value chain (which describes the organisation), how things evolve (which describes change) and the changing characteristics of activities, practices and data between two extremes (the uncharted to the industrialised) - see figure 1.

Figure 1 - PST.
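
As a rough illustration of those two axes, here's a minimal Python sketch of how a map's components might be represented as data; the component names, positions and data shape are my own illustrative assumptions, not an official format.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    visibility: float   # position on the value chain: 1.0 = the user need, 0.0 = deeply hidden
    evolution: float    # 0.0 = genesis / uncharted, 1.0 = commodity / industrialised

toy_map = [                            # hypothetical slice of a media company map
    Component("viewer need: watch show", 1.00, 0.70),
    Component("new show format",         0.80, 0.10),
    Component("content pipeline",        0.60, 0.40),
    Component("streaming platform",      0.40, 0.75),
    Component("compute / hosting",       0.10, 0.95),
]

for c in sorted(toy_map, key=lambda c: c.evolution):
    stage = ["genesis", "custom", "product", "commodity"][min(3, int(c.evolution * 4))]
    print(f"{c.name:26s} visibility={c.visibility:.2f} evolution={c.evolution:.2f} ({stage})")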


I use these maps across many businesses and governments to reduce duplication, bias & risk whilst improving situational awareness, gameplay, the correct use of multiple methods & communication - see figure 2.

Figure 2 - Map of a Media Company.



Key to mapping is the evolution axis, i.e. change is measured by evolution rather than time, because you can't actually anticipate change over time in any effective manner (we don't have a crystal ball). See figure 3.

Figure 3 - Evolution



A bit of HISTORY.

My early attempts to map an environment were a disaster because I attempted to use time and time based sequences to measure change. I eventually postulated a path for how things evolved and presented this at Euro Foo 2004, EuroOSCON 2006 and many conferences in between - see figure 4.

Figure 4 - Early Evolution [from EuroOSCON 2006].


This path is what I used to create my early maps in 2005 (though James A. Duncan tells me it was 2004). The problem was, I couldn't demonstrate the path was based upon anything other than a concept. It took until 2007 to collect the data necessary to demonstrate that the evolution path (see figure 3 above) had some merit.

In those early years, I also used to refer to how different methodologies were required e.g. figure 5.

Figure 5 - Different Methodologies [from EuroOSCON 2006]


This was based upon the innovation paradox of Storey & Salaman, Theories about the process of innovation, 2002.

“paradox is at the heart of innovation. The pressing need for survival in the short term requires efficient exploitation of current competencies and requires ‘coherence, coordination and stability’; whereas exploration / innovation requires the discovery and development of new competencies and this requires the loosening and replacement of these erstwhile virtues”

But it wasn't until 2007 that I could clearly demonstrate the changing characteristics of activities, practices, data and knowledge as they evolved. This I have spoken about at literally hundreds of conferences and video talks to hundreds of thousands of people around the world, using various examples (e.g. figure 6 and figure 7).

Figure 6 - Characteristics of Innovation (as in a novel thing) [from Web Strategies 2008].



Figure 7 - Characteristics of the same activity as it evolves to commodity [from Web Strategies 2008].


This also helped explain why the Pioneer, Settler (or what I used to call Colonist) and Town Planner structure worked. I had introduced this structure many years earlier (2005) to replace a failing two-mode model before it - see figure 8.

Figure 8 - PST [from Web Strategies 2008].

The Pioneer, Settler and Town Planner model is a derivative from Robert X. Cringely's model of Commandos, Infantry and Police in the 1993 book, Accidental Empires.

Of course, some of the terms have evolved over the years to make them more meaningful whilst the basics remain as is.
  • I don't use Colonist anymore. I use Pioneer, Settler and Town Planner.
  • I don't use innovation anymore. I use the term genesis (of a novel and new thing) because innovation is so abused to mean everything.
  • I don't use chaotic anymore. I use uncharted to describe the unknown space.
  • I don't use linear anymore. I use industrialised to describe the defined space.
E.g. take a look at older presentations and you'll see many of the same concepts just with different terms.

OSCON 2010


Today, what amazes me is that companies are starting to realise that one size methods don't work. I hear talk of bimodal, two speed IT and dual operating systems for a company.

Why does this amaze me? 

There are a few exceptional leading lights from very early on - Robert Strassmann and Robert Cringely come to mind - but the leading edge of companies were exploring these topics around 2002-2008.  I know, I was there helping the discussion to happen. After this, the early adopters started to appear (Silicon Valley Startups etc). Increasingly I see this work in various Governments.

However, all of a sudden in 2014, an old model, not cognisant of how things evolve and long thought dead, has raised its head. The bimodal, two speed IT, dual operating system is becoming a very popular concept within what I describe as legacy companies. We're talking ten years late to the party and getting the wrong address.

This feels like inertia. A hope that adding lipstick to the pig will somehow keep the pig alive.

A warning

Mapping, use of a three party system, the extremes of uncharted to industrialised are not new. Despite refinement to the terms, this stuff is all nearing or over a decade old. The field has moved on since then into understanding common economic patterns, weak signal analysis and other forms of organisational learning.

By the time something of use gets written in a book or a publication, the value related to it has often been extracted long before. You can't get to the bleeding edge of competition by reading books or publications. You have to live it. Even models like pioneer, settler and town planner are starting to feel dated and are being examined through the lenses of a continuous spectrum and other structures.

The laggards are nowhere near getting to the spaces where the leading edge have long since moved on from. 

The gap appears to be widening.

I used to think the gap between the leading edge and laggards was five to seven years; it now seems like fifteen or possibly much more (i.e. how long it will take the laggards to catch up with where the leading edge is today). This may be a normal consequence of inertia, or of an acceleration in the underlying cycle of change combined with poor situational awareness - of this, I'm not sure. It needs testing.

However, within the laggards it does feel as though the copying of easy to digest memes is more rampant, circulated by those who benefit from the memes to those that need an 'easy solution', in a marriage of convenience. As Rosenzweig once said:

“Managers are busy people, under enormous pressure to deliver higher revenues, greater profits and ever larger returns for shareholders. They naturally search for ready-made answers, for tidy plug-and-play solutions that might give them a leg up on their rivals. And the people who write business books – consultants and business school professors and strategy gurus – are happy to oblige.”

I see an awful lot of companies plunging down paths they shouldn't with little or no situational awareness. This isn't getting better for some. There's going to be an awful lot of casualties. 


March 14, 2015

A long journey ... but a happy one ... Karma is kind. by Simon Wardley

Seven years ago, I wrote the presentation below (one of many). I presented it at several large companies and then at a public Strategy conference in April 2008 to around 600 people. I got into several arguments.

So, how have things progressed in the last 7 years? Well, during the talk, I covered a subset of my research including :-
  • The cycle of commoditisation
  • The evolution curve (ubiquity vs certainty)
  • Evolution vs Diffusion
  • Three layers of IT shifting from product to utility
  • Why we had no choice (pressure for adoption, Red Queen)
  • Accelerators to evolution (network effects, open source, standards)
  • Polar extremes of an organisation (innovation to commodity)
  • Properties of those polar extremes
  • Why one size methods don't work and the need for different methods  (Agile and Six Sigma)
  • How to organise for these two extremes (Pioneers, Settlers and Town Planners)
  • Different investment / competition effects as activities evolve
  • The importance of maps (though I didn't cover how to map).
  • The issue of bias
  • Creative destruction
  • The innovation paradox
  • Componentisation
  • How reducing non strategic cost is both friend (operational efficiency) and foe (reducing barriers to entry)

I also provided Four Predictions
  • An increase in the rate at which innovative services and products are released to the web.
  • More disruption in the information markets.
  • Increasing organisational pressure within IT.
  • The architecture will never be completed.

7 years later ... the predictions all hold and every item on the presentation is still relevant. 

In 2007 to early 2008, I was funding my own full time research. I burned most of my own life's savings in order to understand why the experiments which I had undertaken in Fotango during 2002-2006 worked. Of course, I made the concepts all creative commons - I work on a principle of Karma. 

Naturally the work has evolved since then - I've tidied up terms, made a few adjustments - but in principle it's all the same. There have however been some important moments of change. Whilst I used the mapping technique in Fotango (2005) and Canonical (2008), it wasn't until a good friend of mine - Liam Maxwell, who I had worked with (giving my time freely) on the 'Better for Less' paper, which included many of my concepts - persuaded me to talk more openly about the topic that I gained the confidence to speak publicly on How to Map.

Despite my public speaking, confidence was an issue.

Back in 2007, at most business events the ideas I expressed usually created a hostile reaction, ranging from fruitcake, idiot, ridiculous, nonsense, delusional and sad waste of time to rubbish and gibberish. Hence the arguments. I'm not shy of a fight, especially when it gets personal.

Back then it was all one size fits all, our method will solve everything, few understood evolution and even basic ideas of competition were rare. This does wonders for your confidence if you've just spent your life savings investigating something and the majority tell you that you've got it all wrong. I hoped that this was just an educational issue. I had to push slowly at the door.

But I'm made of stern stuff and so I continued. As more people understood, I exposed more of the underlying concepts. Eventually, I had to find a job. Karma was kind and Canonical found me because of a discussion I had with Mark Shuttleworth on evolution. Even though the first five employees I met at Canonical in 2008 told me the 'cloud was just a fad', it changed. Today Canonical dominates the space.

These days all the topics are either considered conventional wisdom or well on the path to such. Even if companies do go off on the wrong track (e.g. two speed IT, bimodal) then I know that eventually they'll work it out. 

Today, I often listen to people speak at conferences and talk about the ideas that I once presented. The hostility is gone. The nodding of heads is commonplace. Knowledge also evolves but then I knew that. Everything evolves.

These days, any arguments are about who created the ideas first. Usually some consultancy firm is claiming them. This makes me smile. Most of the ideas come from long dead economists. I like long dead economists, they work cheap and don't grumble.

It's marvellous to see how things have changed.

Today, I work at the LEF where many governments and companies now fund me to research into even more marvellous areas. Had I never taken the risk, spent my life savings and given the concepts all away then that would never have happened. I'd never have worked for Canonical. I'd never have written the 'Better for Less' paper. I wouldn't have met the good friends I have today.

Karma is kind. 

Presentation



The slides are just an aid to the talk. At some point I'll record the video again (I have the original text) and it'll make more sense but the slides are enough for now.


Brief Update: It’s Cold Season by Albert Wenger

Apologies to my regular blog readers but I have been battling a nasty cold. So instead of getting up in the morning and writing a blog post I have been sleeping in a bit longer. I probably should have taken a couple of days off when I first got this cold a couple of weeks ago but have been kind of swamped at work and have ignored my own advice.

Lots going on that I would like to write about though, in particular I have been intrigued by the debate around Hillary Clinton using her own email account during her time as Secretary of State, as well as the recent decision to retain state workers' email for only 90 days in New York. If you want to discuss some of this, I suggest heading over to USV where Privacy & Security are the topic of the week.


Adding Churyumov-Gerasimenko to my scale comparison of comets and asteroids by The Planetary Society

Having found a color photo of the comet, I finally added Churyumov-Gerasimenko to my scale comparison of comets and asteroids visited by spacecraft.


If it's March, it must be LPSC by The Planetary Society

Next week is the 46th Lunar and Planetary Science Conference (LPSC), and Emily Lakdawalla will be attending to tweet and blog about news from Rosetta; Curiosity; MESSENGER; GRAIL; Chang'e 3; Dawn; New Horizons; Cassini; and more.


March 13, 2015

On Pioneers, Settlers, Town Planners and Theft. by Simon Wardley

I often talk about the use of cell based structures (e.g. think Amazon Two Pizza, Starfish model) which are populated not only with aptitude (the skill to do something) but also with the right attitude (type of people). A map is an essential part of building such a structure (see figure 1).

Figure 1 - Pioneers, Settlers and Town Planners.


The concept of Pioneers, Settlers and Town Planners is a derivative of Robert X. Cringely's idea of Commandos, Infantry and Police as expressed in the delightful 1993 book - Accidental Empires. The first time I used this structure was around 2005-2006.

Pioneers are brilliant people. They are able to explore never before discovered concepts, the uncharted land. They show you wonder but they fail a lot. Half the time the thing doesn't work properly. You wouldn't trust what they build. They create 'crazy' ideas. Their type of innovation is what we call core research. They make future success possible. Most of the time we look at them and go "what?", "I don't understand?" and "is that magic?". In the past, we often burnt them at the stake. They built the first ever electric source (the Parthian Battery, 400AD) and the first ever digital computer (Z3, 1943).

Settlers are brilliant people. They can turn the half baked thing into something useful for a larger audience. They build trust. They build understanding. They make the possible future actually happen. They turn the prototype into a product, make it manufacturable, listen to customers and make it profitable. Their innovation is what we tend to think of as applied research and differentiation. They built the first ever computer products (e.g. IBM 650 and onwards) and the first generators (Hippolyte Pixii, Siemens Generators).

Town Planners are brilliant people. They are able to take something and industrialise it taking advantage of economies of scale. This requires immense skill. You trust what they build. They find ways to make things faster, better, smaller, more efficient, more economic and good enough. They build the services that pioneers build upon. Their type of innovation is industrial research. They take something that exists and turn it into a commodity or a utility (e.g. with Electricity, then Edison, Tesla and Westinghouse). They are the industrial giants we depend upon.

What you want is brilliant people in each of these roles.

How do you get things going within a company?

To Start
Determine (by mapping) an activity that is currently expressed as a product but is ready for industrialisation to a commodity, and even better a utility service. It needs to be suitable, widespread and well defined enough; the underlying technology should exist, and there should be frustration in the market with the current provision.

Now, find some town planners to build the more industrialised form for you (remember they are brilliant, smart and won't be cheap). 

You'll now need some Pioneers (genesis to custom)

The pioneers create the entirely novel and new (genesis) by consuming (building on top of) those industrialised components (it reduces the cost of failure, increases speed etc). Ideally the pioneers are not just within your company but, more favourably, include outside companies building on top of your component. Hence, it's a really good idea to expose your newly industrialised form via a public API.

The more pioneers there are (both inside and outside) the bigger the ecosystem around your component is. This will not only give you greater economies of scale but you can mine the ecosystem to identify future success (it operates as a future sensing engine). If you don't know how, read up on the ILC model here.

Often what the pioneers produce is something just beyond a prototype (more towards an MVP). Often it has bugs or some other failings but it is useful. Within a company, you can incentivise internal pioneers on the creation of new things that are eventually turned into new products & services by use of an internal royalty.

Now, you need some Settlers (custom to product and rental services).

The job of the settlers is to identify common patterns in the ecosystem (whether just internal pioneers or internal & external). This can be done by leveraging consumption data of the underlying components, by inspecting a range of new activities for common elements or simply by taking something an internal pioneer developed. Once a pattern or activity is identified, the job of the settlers is to turn it into some sort of product, i.e. they steal from pioneers and productise it (make it manufacturable, documented, profitable, stable etc). You can incentivise settlers by product profitability and by which products make it to utility services.
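
As a simplified illustration of mining consumption data for common patterns (the data shape, thresholds and component names below are assumptions of mine, not the ILC model itself), a settler-style filter might look like this:

from collections import defaultdict

# hypothetical consumption log entries: (pioneer_app, component, month, calls)
LOG = [
    ("app-alpha", "image-resize",  "2015-01",  120),
    ("app-beta",  "image-resize",  "2015-02",  900),
    ("app-gamma", "image-resize",  "2015-03", 4000),
    ("app-alpha", "sentiment-api", "2015-03",   40),
]

def productisation_candidates(log, min_consumers=2, min_growth=3.0):
    """Flag components consumed by several pioneer apps with fast-growing usage."""
    by_component = defaultdict(lambda: {"consumers": set(), "calls": defaultdict(int)})
    for app, component, month, calls in log:
        by_component[component]["consumers"].add(app)
        by_component[component]["calls"][month] += calls
    picks = []
    for component, data in by_component.items():
        months = sorted(data["calls"])
        growth = data["calls"][months[-1]] / max(1, data["calls"][months[0]])
        if len(data["consumers"]) >= min_consumers and growth >= min_growth:
            picks.append((component, len(data["consumers"]), round(growth, 1)))
    return picks

print(productisation_candidates(LOG))   # [('image-resize', 3, 33.3)]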

A key role of the settlers is to steal from the pioneers, who are in effect acting as the settlers' R&D centre. Sometimes an internal project is going nowhere and the settlers will steal it and replace it with something from the outside market. There is a problem here in that settlers can sometimes want to keep something going for too long. Also, if you're using an ecosystem to identify future success then remember, you are the gardener of that ecosystem. If you harvest too much, it'll die off. So think carefully - you need to harvest and nurture.

Now you need some ... wait ... you've already got these ... Town Planners (product / commodity and utility services)

The job of the town planner is to build the core, volume operations based, good enough, ultimately (long term) low margin but highly industrialised services & commodity components. They use the portfolio of the settlers, any consumption data and comparison to the outside market to determine whether something is suitable. Once identified, they build the core service and inform the other groups (e.g. you tell the pioneers so they have something to build upon, and the settlers so they know that this part of the portfolio is being industrialised). You incentivise town planners on volume and breadth of service offerings. You actively encourage them to cannibalise your own business as fast as possible.

By creating this virtuous cycle which is incentivised so that each group steals the work of the former - town planners stealing from settlers who steal from pioneers who build on the work of town planners - you can accelerate innovation, customer focus and efficiency all at the same time and remove the threat of inertia to change.

Some things to note.

Each group innovates but innovation is not the same for each group, i.e. the innovation of an entirely new activity is different from the feature differentiation of a product, which is different from converting a product to a utility service. Unfortunately, despite these being different forms of innovation, that won't stop people pretending there's only one and it's all the same. Try not to do this.

Each group has different attitudes though aptitudes (e.g. broad skill description such as finance, engineering or marketing) might be the same. Engineering in the pioneering group is not the same as engineering in the town planners.

Each group has a different culture. I hear endless stuff about company culture. Go ask special forces, infantry and police whether they have the same culture or not.

On Disruption
  • Pioneers don't disrupt. There is nothing to disrupt.
  • Settlers sometimes disrupt (as in product to product substitution in an industry). This is unpredictable.
  • Town Planners often disrupt past industries. This can be anticipated quite a time in advance.
  • There are two forms of disruption (predictable and non-predictable) but that doesn't stop people pretending there is only one.

Some Exceptional Models

Whilst building two party systems (e.g. pioneers and town planners) usually causes huge headaches, there is one exception.

IF (and only IF) ...
  • You are using an outside ecosystem to act as your pioneers (e.g. providing utility services through public APIs and mining consumption data)
  • You introduce some form of forcing function to prevent novel activities being created internally e.g. introducing a requirement to create a press release before writing a business plan or prototyping anything. [Hint : people can't write press releases for novel, uncertain activities which haven't been explored yet]

Then in these exceptional circumstances you can build a settler and town planner structure by outsourcing your pioneering capability to the outside world. You're really still three party but you've outsourced one part.

Whilst one party systems (e.g. one size fits all) usually cause huge headaches, there is one exception.

IF (and only IF) ...
  • You are using an outside ecosystem (consumption) to act as your pioneers
  • You are using an outside ecosystem (supply) to act as your town planners
Then in these exceptional circumstances you can build a settler structure by outsourcing your pioneering capability to the outside world along with the component supply chain. In this type of structure you purely manage and exploit the flow from the uncharted to the industrialised. You're still really a three party structure, it's just that you've outsourced two of the parts.


Terry Pratchett by Charlie Stross

Friendship is context-sensitive.

I wouldn't describe Terry as a friend, but as someone I'd been on a first-name acquaintanceship with since the mid-1980s. If you go to SF conventions (or partake of any subculture which has regular gatherings) you'll know the way it works: there are these people who you don't really see outside of this particular social context, but you're never surprised to see them in it, and you know each other's names, and when you meet you chat about stuff and maybe sink a pint together.

I haven't seen Terry since the Glasgow worldcon in 2005. The diagnosis of his illness came in 2007; I'd been spending a chunk of 05-07 out of the country, and after the bad news hit I didn't feel like being part of the throng pestering him (for reasons I'll get to later on in this piece.)

I first met him, incidentally, back in 1984, at a British eastercon in Leeds. It was, I think, my first SF convention. Or my second. I was a spotty 17- or 18-year-old nerd, wandering around with a manuscript in a carrier bag, looking for an editor—this was before the internet made it easy to discover that this was not the done thing, or indeed before word processors made typewritten manuscripts obsolescent. (Let's just say that if in a fit of enthusiasm you borrowed your future self's time machine and went back to that convention in search of me you'd have been disappointed.)

There were plenty of other embryonic personages floating around there, of course. I remember meeting this tall goth dude with shaggy hair, dressed all in black and wearing mirrorshades at midday, who resembled the bassist from the Sisters of Mercy. (He was called Neil, he wrote for a comic called 2000AD, and he had an oddly liminal superstar quality even then: everyone just knew he was going to be famous, or in a band.) And there was this thirty-something guy with glasses and a bushy beard propping up the bar. What set him apart from the other guys with beards and glasses was that he had a hat, and he was trying to cadge pints of beer with an interesting chat-up line: "I'm a fantasy writer, you know. My third book just came out—it's called 'The Colour of Magic'." So you'd buy him a drink because, I swear, he had some kind of bibulous mind-control thing going, and he'd tell you about the book, and then you'd end up buying the book because it sounded funny, and then you were trapped in his snare forever.

Back then, Terry was not some gigantic landmark of comedy literature, with famous critics in serious newspapers bending over to compare his impact on the world of letters to that of P. G. Wodehouse. Terry was earning his living as a press officer and writing on the side and didn't feel embarrassed about letting other people pay for the drinks. And so over the next few years I bought him a pint or two, and began to read the books. Which is why I only got hooked on Terry's shtick after I'd met him as Terry the convention-going SF fan.

Some time between about 1989 and 1992, something strange began to happen. I started seeing his name feature more prominently in bookshops, displays of his books planted face-out. He started turning up as guest of honour at more and more SF conventions. When a convention did a signing with Terry, suddenly there was a long queue. And when he walked into a room, heads turned and people began to close in on him. There's a curious phenomenon that goes with being famous in a particular subculture: if everybody knows you, you become a target for their projected fantasy of meeting their star. And they all want to shake your hand and say something, anything, that connects with what your work means to them in their own head. (If you want to see this at work today, just go to any function he's appearing at—other than the Oscars—and watch what happens when Neil Gaiman walks into the room. He is, I swear, the human Katamari.)

Being on the receiving end of this phenomenon is profoundly isolating, especially if you're one of those introverted author types who can emulate an extrovert for a few days at a time before you have to hide under the bed and gibber for a while: you're surrounded by strangers who desperately want to connect with you and after a time it becomes really hard to tell them apart, to remember that they're individuals with their own lives and stories and not just different faces emerging from the surface of a weird shape-shifting fame-tropic amoeboid alien. It's not just authors who get this: if anything we get off very lightly compared to actors, politicians, or rock stars. (For some insight into it, go listen to the lyrics of Pink Floyd's "The Wall".) I should add, this sort of introversion is really common among writers. It's an occupation that demands a certain degree of introspective self-absorption, alongside a constant distance from the people you're observing, who—they mostly don't know this, of course—may provide the raw fuel for your work. So, if you want to hang on to your sanity, eventually you either go and hide for a bit, or you surround yourself with people who aren't faintly threatening strangers who want a piece of your soul. Which is to say, you selectively hang out with your peers, or folks you met before you caught the fame virus.

Terry was not only a very funny man; he was an irascible (and occasionally bad-tempered) guy who did not suffer fools gladly. However, he was also big-hearted enough to forgive the fools around him if they were willing to go halfway to meeting him by ceasing to be foolish at him. He practiced a gracious professionalism in his handling of the general public that spared them the harsh side of his tongue, and he was, above all, humane. As the fame snowballed, he withdrew a bit: appreciating that there was a difference between a sharp retort from your mate Terry at the bar and a put-down from Terry Pratchett, superstar, he stepped lightly and took pains to avoid anything that might cause distress.

Anyway, this isn't a biography, it's just the convoluted lead-in to an anecdote about the last time I saw him (which was a decade ago, so you'd better believe me when I say our relationship was "situational friend" rather than "personal friend").

On the last day of the worldcon in 2005, I was wandering around feeling extremely frazzled and a bit hunted. I'd just won my first Hugo award, and my right hand was sore from people I didn't know grabbing it. Eventually I realized that I just couldn't cope with the regular convention concourse in the conference centre—I was a walking target of opportunity for people who wanted to shake the hand that held the pen that wrote the ... something, I guess.

At a British worldcon, you can count on there being a really excellent real ale bar tucked away in a corner of one of the hotels or fan areas. I headed for the real ale bar and found a degree of comfort and shelter there, because it was mostly full of familiar faces who didn't need to push into my personal space because I was just some guy they'd been bumping into in convention bars for a decade or two. The rate of hand-grabbing dropped to a survivable level: I began to relax, and found a couple of old friends to hang with. And then I noticed Terry.

Terry had not won a Hugo. He didn't need to. (As he said, "I was in the audience at some literary awards ceremony or other with J. K. Rowling one time, and she was lamenting how they'd never give her one, so I turned to her and I said, Jo, me neither: we'll just have to cry ourselves to sleep on top of our mattresses stuffed with £20 notes." Money being, of course, the most honest token of appreciation a commercial author can receive.) Terry didn't need a shiny new Hugo award to find it nearly impossible to walk around a convention and just be a fan: I was getting my first taste of the downside of fame, but Terry had been living with being Terry Pratchett, OBE, Richest Author in all the Land, for more than a decade. He was looking tired, and morose, and a bit down in the dumps. So we went over to say hi.

At this point, he perked up. Omega, who I'd been chatting to, had first met him in the mid-80s, about the same time as me: Feorag got a pass for being married to one of us. He'd been having a hard time being Terry Pratchett in public for five consecutive days. He wasn't quite ready to go and hide out in his hotel room, but he needed some respite care from being a Boss-level target in every starry-eyed fan's first-person autograph shooter; so, as it was coming up on lunchtime, by mutual agreement we dragged him away from the SECC to Pancho Villa's in Glasgow for lunch. Okay, Glaswegian-Mexican food is not what you'd necessarily call good good. But it filled a corner and, more importantly, it got him far enough away from the convention to decompress a little in company that wasn't going to place any demands on him.

Now, Terry (like the late Iain Banks) seemed to feel a bit of noblesse oblige (or maybe just plain survivor's guilt) over the sheer mind-boggling scale of his success. ("I realized I was rich," he recounted, "when I got a call from my agent one Thursday. That cheque I mailed you—did you get it? He asked. And I realized I couldn't find it: lost down the back of the sofa or something. Can you cancel it and mail me a new one? I said. And he said, yes I can do that, but you realize you won't be able to deposit it before next week and you'll lose the interest on it? And I said sure, just go ahead, cancel it, and send me a new one. Then I put the phone down and realized it was for half a million pounds.") Things had obviously changed since the days when he had to cadge drinks off fans in convention bars: and I realised that I hadn't bought him a pint since about 1989, and this rankled a little bit. Nobody likes to think of themselves as a charity case. Also, I'd just won a Hugo and landed a new three book deal and was beginning to feel a bit of that survivor's guilt myself.

So at the end of the meal, while he went to the toilet, I tried to pick up the bill. But the waitress was slow, he got back to the table before she could make off with my credit card, and when he pulled out his gold visa card, snarled "who's the rich bastard here?!?", and chuckled to himself, I knew I was beat. And I never did get to buy him lunch, in the end.

Anyway, those are some of my memories of Terry Pratchett.

He was generous not just with money, but with his soul. He was irascible, yes, and did not suffer fools gladly: but he was empathic as well, and willing to forgive. Witty. Angry. Eloquent. A little bit burned by his own fame, and secretly guilty over it, but still human. And the world is smaller and darker without him, and I miss him deeply.


Bundle of Laundry by Charlie Stross

Some of you may be aware that there's a tabletop role-playing game set in the Laundry Files universe, sold by Cubicle 7 Games.

It's available on paper, and as PDF downloads via the usual folks (such as DriveThruRPG).

Anyway, there's a special promo for the next couple of weeks; Bundle of Holding, who do humble bundle style sales of RPG materials, are doing a special Bundle of Laundry offer. For $8.95 or more, you get the core rule book and the player's handbook as PDFs; if you pay more than their median price (currently $24.32) you get a whole bunch of extra supplements—basically the entire RPG for under $25 (or about £16.50 in real money). Oh, and this stuff? Is all DRM-free.

So if you've had a vague yen to dust off a tabletop RPG for an evening's fun with friends, why not see if you, too, can survive your training as a Laundry operative without losing your mind?


Turbulent Deaths by Astrobites

Title: The Three Dimensional Evolution to Core Collapse of A Massive Star
Authors: S. M. Couch, E. Chatzopoulos, W. D. Arnett, F. X. Timmes
First Author’s Institution: TAPIR, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA
Status: Submitted to ApJ Letters

 

There are hordes of them out there. Giant behemoths that masquerade as massive stars, but that never birthed a single radiant photon nor fused a pair of hydrogen nuclei. All of them are found on Earth. Instead of atoms, they're built up of strings of 0s and 1s and live in computers of all shapes and sizes across the world (though they'd prefer the roomier accommodations of supercomputers, if you ask them). Inspired by their real, yet enigmatic counterparts in the physical universe, we first brought them to life in simple, spherically symmetric, one-dimensional form. We quickly bestowed on them full three-dimensional complexity and an increasingly comprehensive set of input physics as soon as our computers possessed the computational brawn to handle them, feathery convective plumes and other instabilities, mottled compositional complexions, freewheeling invisible neutrinos, and all.

All for one goal: to understand how they die. We've observed their real counterparts, supernovae, over and over and over again, at a rate of about one every five seconds, bursting in galaxies near and far, their spectacular showers of photons (sometimes rivaling that of an entire galaxy) often traveling cosmic distances to reach our army of telescopes on Earth. We've sought to replicate these supernovae in our virtual massive stars, giving them encouraging nudges by injecting extra boosts of energy-imparting, explosion-inducing neutrinos, giving them a bit of a spin, tweaking how their mass is distributed, sending in sound waves, draping them with magnetic fields.

And yet.  Despite the care with which we’ve crafted them, our virtual massive stars almost always refuse to explode.

 

What might we be missing in our theories of massive stellar death? It's a question we've been asking for decades. Instead of focusing on the properties of the stellar cores that collapse and usher in their deaths, the authors of today's paper turned to consider the life of the star preceding collapse. They were motivated particularly by hints that the silicon-burning shell surrounding the pre-collapse core could be violently turbulent, stirred by convective motions in the shell. The authors thus concocted a new star, one 15 times the mass of our Sun. They harnessed the power of MESA, a special-purpose code built specifically for modeling the life of stars in 1D, from their early lives burning hydrogen, then helium, carbon, all the way to silicon and the formation of an iron core, about three minutes shy of core collapse. In order to focus on the effects of convection in a silicon-burning shell, they stripped their star of any complexifying qualities: no rotation, no magnetic fields.


Figure 1. Development of convection pre-collapse.  The ring-like structure seen here is the silicon-burning shell within the star the authors modeled.  A map of the amount of silicon (by mass, top left) reveals intricate swirls and eddies caused by convection in the shell about 2.5 minutes before core collapse.  Bottom left: a map of where burning occurs.  Bottom right: the distribution of iron (by mass), mostly found at the center of the star.  Top right: the speed of the gas (redder is faster).  For a video showing how the convection develops, see the movie the authors made here.

At this point, however, their star possessed no convection, which cannot develop in 1D. Thus the authors turned to FLASH, a powerful hydrodynamics code that can follow the evolution of the complex gas motions that give rise to convection in stars. And this time, they let the star evolve in 3D. At the end of this, they had a star with a fully convective silicon-burning shell (see Figure 1), replete with characteristic convective plumes—spectacular ones that spanned the entire width of the silicon-burning shell and churned at velocities of several hundred kilometers a second, whirling around a 1.3 solar mass iron core on the verge of collapse.

And then, of course, came the collapse. The authors exploded two stars, twin stars, identical in every way except that one lived in one dimension and was thus spherically symmetric, while the other lived in three (though because of the computational complexity they modeled only an octant of the star) and thus retained its convectively-stirred, complex 3D structures. To help the stars explode, they were given identical shots of extra energy in the form of neutrino heating, then let go. And go they did—and differently in some key ways. Their cores initially evolved in much the same way: they collapsed and rebounded, each giving birth to a shock, and both shocks successfully continued to grow. When the shock reached the silicon-burning shell, substantial differences began to show: the 3D convective star's shock grew more rapidly than its 1D twin's, and had a larger explosion energy. Though the authors did not evolve the collapse long enough to determine whether or not the star eventually exploded, these were promising signs that an explosion could be achieved more readily.

So does this mean that we now have it—the secret to the deaths of massive stars? Not quite. Many assumptions and simplifications—the initial 1D models, the 3D octant of the star, to name a few—were made. But while these new models were necessarily contrived, given the limits of today’s computational brawn, they are still an instructive demonstration that the turbulent environments in which the cores of massive stars breathe their last can affect how the rest of the star’s death plays out.


An internal ocean on Ganymede: Hooray for consistency with previous results! by The Planetary Society

A newly published paper confirms a subsurface ocean at Ganymede. An ocean there was already suspected from its magnetic field and predicted by geophysics; new Hubble data confirms it, and even says it is in the same place we thought it was before. Such consistency is rare enough in planetary science to be worth celebration.


In Pictures: Expedition 42 Crew Returns to Earth by The Planetary Society

Three International Space Station crew members are back on Earth today following a morning Soyuz landing on the snowy steppes of Kazakhstan.


March 12, 2015

Wardley Map - a set of useful posts. by Simon Wardley

Ok, for those who want to learn how to map, I've provided a few links to what I consider useful posts.

A quick video (speaking is my preferred medium)


A longer and more detailed version.




Various Posts.

NB Some use the older terms Chaotic & Linear which I changed to Uncharted & Industrialised (more apt). I've put this list here (there's about 950+ posts on this blog). I'll try and add more, tidy things up and you never know ... persuade someone else to turn it into something readable.

Basics.
An introduction to Wardley Maps
A step guide to mapping
On creating a value chain
Getting stuff done.
A guide to mapping.
From strategy to mapping to pioneers (slides)
The good bits about mapping
The amazing bits of mapping
The wow of mapping

Concepts
Evolution
Properties of evolution
Inertia
Componentisation
Componentisation II
There are many chasms to cross
Co-evolution
The Two Extremes
Oh, no Six Sigma vs Agile
Early failures
When to use a curve
On services

Patterns.
Of Perils and Alignment.
Revolution
Company Age
Ecosystems
On Ecosystems and Porter.
Punctuated Equilibriums
Consumerisation.

On the two forms of disruption
What's right and wrong with Christensen.
On evolution, disruption and the pace of change.
Does maturity matter
Ten graphs on organisational warfare.

Operations
Basics of operation
Let the Towers Burn
Government, Purchasing
How we used to organise ourselves
Pioneers, Settlers and Town Planners
(another post on PST)
Two speed IT / Bimodal.
Other Tools I use with Mapping
Business Model Canvas ... the end of a long journey
Maps and the Target Operating Model.
Maps are imperfect but that's ok.
Rough guide - use cloud, build cloud and microservices.
On maps, components and markets
This is not the data you are looking for.
Why no consultants.

Strategy
Open Plays
Open source, gameplay and cloud
Scenario planning
Strategy vs Action
On disruption and executive failure.
Epic Fails of Sensible CEOs
Tower and Moat.
On Chess and Business
Preparing for War
Dungeons and Dragons vs The art of Business.
On D&D and Ant Battles.
SWOTs
Quick route to building a strategy.
Four basic smackdowns on competition
Does commoditisation lead to centralisation.
Fast Follower Conundrum.
Attack, defend and the Dark Arts.
Self disruption and super linear.
On the death of great companies.
The interesting thing about cutting costs.
Jevons in a nutshell

General Stuff
A useful summary post
Project, Products, Open source and Proprietary.
Context, situation and components.
Composability.
Is war the mother of invention?
The abuse of innovation
Half completed book

Final Last Few
Continuous and Sustainable Competitive Advantage comes from Managing the Middle not the Ends
Two Speed Business? Feels more like inertia.
On Pioneers, Settlers, Town Planners and Theft.

There's also the team at WardleyMaps (with whom I'm not affiliated) who are trying to turn my long and sometimes rambling concepts into something readable.

For reference, all my writings are creative commons 3.0 share alike licensed, as is the entire mapping concept, maps and evolution diagrams.


Cover Reveal by Charlie Stross

I have a new book coming out in the first week of July: it's The Annihilation Score (UK ebook link), and here's the cover Orbit have done for the British edition!

Annihilation Score cover

And in case that's not enough, because it's published on both sides of the pond, here's the US ebook edition, and the American cover art:

Annihilation Score cover

If you detect a certain violin-theme running through both covers, you'd be perfectly right. Because this may be the sixth Laundry Files novel, but there's a new twist: this one isn't about Bob, it's about Mo. And superheroes. And a certain bone-white instrument ...


Liverpool Bus Hack. by Feeling Listless

Liverpool Life Lately, for various reasons this past couple of years I've been getting the bus home from the Liverpool One bus station - the 80, the 80s, the 75, the 75a, the 76 and such services during rush hour - and despite me only living at Sefton Park, because of the rush hour traffic in the city and the collecting of passengers at the various stops, the journey can take up to fifty minutes if I get on at 4pm.

Except this month and next, from the beginning of March to the end of April, due to gas maintenance work on Hanover Street, these buses are not at Liverpool One. They're picking up and dropping off at Great Charlotte Street. Coming into town this means I've been getting off at the old Lewis's building and walking down Ranelagh Street into Hanover Street and onwards to my destination, which I've quite enjoyed, especially being able to pass through WH Smiths at Central (underground) Station on the way.

Coming home, being an entirely lazy human and wanting to avoid the crush of Great Charlotte Street in rush hour and all the "hey, there's a queue here" moans which come with that bus stop, I decided to throw out convention and catch a 27 bus, the Sheil Road circular, to somewhere close to home and walk from there instead. I assumed this would cock up the whole routine, making the process of coming home even more taxing.

Um, no. It's not worked out that way. In fact, I can't imagine why I didn't think of this before.

Catching the 27 cuts out the whole of the city centre. Travelling towards Parliament Street, thence to Park Road and Princes Avenue, takes about ten minutes, twenty minutes shorter than it takes to get through the city streets on the edge of being engulfed by rush hour traffic. Then, being as I said inherently lazy and with a Day Rider or Saveway-type ticket, I've jumped off the 27 and onto a 75 which has taken me to my usual bus stop near home.

A home I'm now getting to a full half hour sooner than I have for the past two years on those days. Like I said, I can't imagine getting a different bus from there now.

Some notes:

-- This can only work with these buses. Even when they return, this will not save you from the murder of the 86 or 86a. Unless you get the 27 to Princes Avenue, swap to an 80 or 75, then swap again on Smithdown Road at the stop near the post office. This seems like it could be unnecessarily complicated, but I guess there are probably enough 86s on the roads that the wait times will still be shorter than the mess of getting out of the city centre on a busy day.

-- This happened on Monday:


On Tuesday I missed the 27 (though they run about every five minutes) and this queue, which looks like a daily occurrence, seems actually to have been for the X1, the Runcorn express bus which goes up the Dock Road to Aigburth Road, and seems to be the way that people who'd usually get the 82 skip the city centre themselves. Quite how the gentleman expected me to know this, I'm not sure.


Triangles come in to land by Goatchurch

In spite of being up to lots of things, I’ve not been very interested in blogging of late.

I got my first flight of the year — a 3 minute top-to-bottom that began with a nil-wind terror swoop on take-off, followed by my almost forgetting to unzip the harness on landing due to being distracted by the sight of ducks paddling around in one corner of the water-logged field.

landingshadow

Here’s the data stream from the landing.
landingdata
Vertical lines at 5 second intervals. Yellow for the barometer (air pressure rises as I descend), red for airspeed, cyan for GPS ground speed (seeming to correspond), white for the accelerometer pitch measurement, showing the pathetic flare coming into landing when all the speed drops off. The previous hump may correspond to the final approach turn (you have to push out to tighten the turn to a turning circle of about 35 metres).
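
For anyone wanting to reproduce a plot like this from similar logger output, a minimal matplotlib sketch might look like the following; the file name and column names are placeholder assumptions, not the actual log format.

import matplotlib.pyplot as plt
import pandas as pd

# placeholder file and column names: t (seconds), baro, airspeed, gps_speed, pitch
df = pd.read_csv("landing_log.csv")

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(df["t"], df["baro"],      color="gold", label="barometer (pressure)")
ax.plot(df["t"], df["airspeed"],  color="red",  label="airspeed")
ax.plot(df["t"], df["gps_speed"], color="cyan", label="GPS ground speed")
ax.plot(df["t"], df["pitch"],     color="grey", label="accelerometer pitch")
ax.set_xticks(range(int(df["t"].min()), int(df["t"].max()) + 1, 5))  # gridline every 5 seconds
ax.grid(axis="x")
ax.legend(loc="upper right")
plt.show()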

takeoffdata
Here’s the take-off sequence, with a slight push-out which was not held long enough, so I dropped very fast. The yellow for the barometer briefly goes below the starting value showing that at one time I got a bit of lift and could have been almost half a metre above take-off.

All in all, quite disappointing, but I’m glad to have some data to work with from my electronics device. I’m going to really appreciate the next flight when I stay up for a bit.

hgelectronics
Oh yeah, here’s a close-up of the one corner of my dog’s breakfast electronics project.

boxprinting
Luckily, 3D printers can print anything — including the abominable box I’ve “designed” in OpenSCAD to cram that electronics stuff into.

machtooldoes
Meanwhile, all this will probably be shelved due to this widget showing up in the hack-space this lunchtime. More later.


Eta Carinae’s New Dimension: 3D Printing Meets Computational Astrophysics by Astrobites

 

A new dimension

In a sense, our view of the Universe is two dimensional. We are relatively confined to our cosmic location, and can only observe the passage of minuscule spans of time compared to the timescales of most cosmic processes. As we look out into space, we see light from stars and galaxies as a 2D sheet stretched over the inside of a giant spherical shell, and are unable to spin around a supernova remnant to view it at different angles or click fast forward to view the entirety of a galactic merger event. Modern technology has helped to alleviate this constrained view, as computer simulations allow astronomers to view these events as more than a static picture on the sky and gain knowledge of how they evolve in time. Today's paper utilizes computational astrophysics to examine a time-dependent event, but also incorporates another technological advance into the research arsenal. The authors brought a new dimension to their results by including 3D interactive features in their journal publication and creating the first 3D prints of an astrophysical supercomputer simulation's output. From rocket engines to replacement human organs, applications of 3D printing have offered enormous potential for technological advance and, as this paper shows, can also lead to new discoveries in astrophysics.

 

Eta Carinae – A big fan of stellar winds

Eta Carinae is a poster child of stellar explosions.  In the 1840s, this massive binary star system produced the biggest non-terminal stellar explosion ever recorded, which made it the second brightest object in the non-solar-system sky and later produced the famous Homunculus nebula.  Though its light has dimmed since then, the attention it receives from the astrophysical community certainly has not. Massive stellar binaries such as Eta Carinae are relatively rare in our galaxy, and the interaction of these behemoths is a test bed for interesting physics.

Because they are so luminous, both stars have powerful radiation-driven stellar winds.  In fact, the more luminous primary component of the system (ηA) has one of the densest known stellar winds, whereas the less luminous member (ηB) has extremely fast winds that are believed to fly at a swift 3000 km/s. The authors explored the region where the stellar winds of the two stars collide and interact, known as the wind-wind interface region (WWIR).  This violent collisional interface results in strong X-ray emission but is tough to model; with a short period of ~5.5 years and a high eccentricity of ~0.9, the orbital motion of the system greatly affects the geometry and dynamics of this region. Smoothed particle hydrodynamics (SPH) simulations, which model the fluid as many smeared-out interacting particles, were used to simulate the colliding winds.


Figure 1 – 3D view of Eta Carinae’s WWIR at apastron (left column), periastron (middle column), and 3 months after periastron (right column) from SPH simulations. For each orbital position, the top and bottom rows show different spatial orientations (rotated 180 degrees about the z-axis). The images show a top-down view of the orbital plane, and also include the 3D surface of the WWIR that exists above the orbital plane. Color represents the density of the wind. The stars are scaled up in size, and key features are labelled. Credit: Figures 1-3 in Madura et al. 2015.

3D visualizations of the simulation were created for 3 specific phases of the orbit: apastron (furthest orbital separation), periastron (closest orbital separation), and 3 months after periastron. Figure 1 shows 2D slices of the wind-wind interface at these three orbital positions, but if you have Adobe Acrobat Reader, you can see the real-deal 3D representation.  Though the interface region at apastron (figure 1, left column) is by far the “cleanest”, it is still riddled with protrusions due to Kelvin-Helmholtz and non-linear thin-shell instabilities (instabilities that arise from a thin wind being bound by shocks on both sides). The visualizations at periastron (figure 1, middle column) and 3 months post-periastron (figure 1, right column) are far more complicated, but also far more intriguing, and uncovered features that went unnoticed in previous studies using 2D visualizations. One feature is a clear “hole” in the trailing arm of the WWIR.  The authors believe that this is due to the high eccentricity of the orbit; when ηB‘s orbital dance brings it closest to the massive ηA, it becomes embedded in ηA‘s dense wind and its own wind is forced out in a confined direction.  Figure 2 shows the cavity being blown out in the WWIR by mapping the wind speed and density for a small region around the stars in the simulation (this can be seen much more clearly in the interactive video, which can be found in the paper). The complex post-periastron geometry contained another surprising new feature uncovered by the 3D analysis – protruding tubes in the WWIR of ηB‘s hot wind encased in thin shells of ηA‘s dense wind (see figure 1, right column). About 2 dozen of these “fingers” were produced in the WWIR, projecting out several AU.  The study suggested that the fingers are the result of instabilities in the colliding winds such as thin-shell or Rayleigh-Taylor instabilities, though if the fingers are in fact real, not even Hubble’s sharp eyes would be able to spatially resolve them.


Figure 2. The density (left) and speed (right) of the stellar winds at periastron. The wind velocity vectors are overlaid on both plots to show the magnitude and direction of the wind. The length of the arrows is proportional to the magnitude of the wind speed. To better see the evolution of the wind, see the short movie embedded in the paper. Credit: Figure 7 in Madura et al. 2015.

 

Holding a star in your hands

3D prints have been made of astrophysical objects before (even of Eta Carinae), but all have been rendered based on observations rather than simulations. This study created the first 3D models from the output of simulations, which allowed even better examination of the system’s detailed geometry and its variations over time.  Though the intricacies of the models led to a broken finger (not a human finger, thankfully), it was nothing that a little glue couldn’t remedy. Figure 3 shows one of the 3D prints of Eta Carinae’s WWIR.


Figure 3. 3D rendering and 3D printed model of the WWIR at periastron. 1 inch corresponds to ~35.5 AU. Credit: Figure 9 in Madura et al. 2015.

This paper stresses the importance of incorporating 3D interactive figures into PDF journal publications, and the benefits of using 3D visualizations and models to best understand complex, time-dependent astrophysical phenomena. 2D figures are dominant in the literature because of the need to display 3D data in a classic paper-journal format, but now that most astronomical journals are available online this need is becoming obsolete. Though still in its nascent stages, 3D printing in astronomy offers a new and interactive way of communicating complex cosmic systems to the public, and even offers a medium to better describe the wonders of astronomy to the blind and visually impaired. Maybe one day soon a subscription to the Astrophysical Journal will come equipped with a pair of 3D glasses.


NASA and Orbital ATK Complete SLS Booster Test (Updated) by The Planetary Society

A blast of fire and smoke lit up the hills of Promontory, Utah this morning as NASA and Orbital ATK completed a test firing of a Space Launch System solid rocket booster.


March 11, 2015

My Favourite Film of 2005. by Feeling Listless



Film The other key film during my film studies days was Don Roos's Happy Endings, which was the reason I ended up writing a dissertation about hyperlink cinema. The original subject was metafiction in Woody Allen's films, notably Annie Hall, Deconstructing Harry and one other which I could never quite decide on, but after reading around on the topic I realised that very little had been written about it in film terms at that time, which meant I'd spend most of my time reading literary criticism and I didn't want to do that. Luckily I happened to be reading Jason Kottke's blog one afternoon and noticed him writing about hyperlink cinema, and the articles he linked to became the backbone of what I'd spend the rest of that summer writing and led to the ability to say that I actually wrote about Richard Curtis's Love Actually for my dissertation.  Actually.  The guts of what I wrote about that film is here and I had planned to write something just as long about why Happy Endings isn't rubbish, but having reviewed the chapter it turns out I wrote a lot more about what Curtis did than either Altman (in Short Cuts) or Roos, presumably because there was a lot more to say about it but also because I sensed, I think, that Happy Endings isn't really a hyperlink film, but an ensemble piece more akin to Hannah and Her Sisters or Parenthood with its familial connections and the like. Expect spoilers.

Narrative.

The narrative in Happy Endings is closer to the more straightforward structure presented in most ensemble films, with just three stories running in parallel. Don Roos identifies his work as a comedy ‘but obviously gets deeper’ (Johnson, 2005) and is not tied to a familiar generic story pattern. Quart confers hyperlink cinema status on the film because of the complexity with which those stories are told: ‘Roos takes the baggy plotting of the Altman picaresque into web territory: in Happy Endings playing games with time and personal history are a given’ (Quart, 2005: 48). Roos’s motivation for using the form was similar to both Altman and Curtis: ‘There’s not one story line that has to deliver everything […] because you have several stories, the audience can be freshened up. They can feel different things as they go from story to story’ (Johnson, 2005). The film opens with Mamie running into the path of a car, flashing back to the moment when she and Charley conceived their son and the aftermath, then forwards again to the scene where Mamie meets Javier for what appears to be a regular meeting. From here the film unfolds fairly conventionally, with three forms of disruption occurring in the set-up – Nicky blackmails Mamie into helping him make a film so that he can get the information about her son, Charley begins to suspect that Max might be his partner Gil’s son and Otis introduces Jude to his life, his house and his father. The connections between the characters are obvious from then on because Roos was wary of trying to force the connections: ‘I don’t like it when they all kind of connect co-incidentally at the end. Like Crash (2005) they all connect to something and I prefer it when its casual’ (Roos et al., 2005).

Narrative density is increased, however, because of the employment of non-diegetic captions that interrupt the mise-en-scène, presenting information regarding the characters and story outside of exposition within dialogue. The first instance is after Mamie’s shocking motor accident, to explain to the audience that ‘She’s not dead. No one dies in this movie, not on-screen. It’s a comedy, sort of.’ These effectively introduce an extra level of subtext into each scene, with details that the spectator would not otherwise have been aware of, impacting upon their relationship to the action, potentially increasing their level of suture because the concentration of information being presented is greater than the standard shot/reverse shot. In his opening scene Nicky is introduced to the audience before Mamie enters the café, and the caption explains that ‘Nicky is 25, oldest of three kids. He has a gun which he is realizing he left in the car. He has to pee’, de-threatening the character and changing the tone of the ensuing scene outside of the diegetic space (one wonders, in this example, if some of Nicky’s desperation is a result of bodily needs). The implications and meaning of these captions change on subsequent viewings – it is later revealed that Nicky is the adopted older brother of the boy that Mamie could not abort. Note that these captions only ever complement the action and never intrude on scenes presenting important verbal or visual narration – usually the action will pause (as occurs with Nicky’s introduction) or be of a humdrum nature (Mamie’s arrival at the salon) – so that the attention of the audience is still directed in a linear fashion.

The captions eventually restructure the climax, because once the narrative reaches its apparent conclusion, Roos explains the fates of the characters, sometimes years or decades after the timeframe of the film. Unlike the ‘where are they now’ section of National Lampoon’s Animal House (1978) or Friday Night Lights (2004), as Victor Morton identifies there are ‘enough changes in fortune (i.e. drama) to make a whole new movie. Compressed into three minutes. And then with a coda of its own’ (Morton, 2005). It could be inferred that this is partially the result of the editing process, since as with Love Actually the first assembly was three hours long and so the director needed to ‘cut a lot of scenes when it was done’ (Lee, 2005). Like Curtis but unlike Altman, Roos appears locked into a need to complete the narrative structure identified by Todorov, even if it means increasing the plot duration exponentially. The closing montage sequence includes a flash forward ten years to show Mamie and Charley meeting their son, possibly completing both of their story arcs, but in other cases the captions present exposition that reaches even further than that - it is explained that Otis ‘watches Ted and Charley’s dogs sometimes and never plays the drums again. But in 20 years he’s happier than anyone else here. But that’s another story.’ Indeed, this whole section also allows new key character relationships to be created and their ‘happy endings’ sometimes occur because of these chance or synchronous meetings, running counter to the normal expectations of hyperlink cinema that such incidents will motivate the central action.

Protagonists.

By producing Happy Endings as an ‘independent’ film, Roos is able to make two of these characters gay without their stories being about their sexuality: ‘It’s a rare studio movie that you can talk about the things I want to talk about. You can have gay characters in a studio movie, but it has to be about them being gay, or else they’re the sidekick. […] You can’t really talk about the love life of a gay man, like I did in The Opposite of Sex (1998)’ (Cavagna, 2005). Despite Alyssa Quart’s insistence that the characters exist without hierarchy (Quart, 2005: 51), each of the three stories has a main protagonist. These are clearly Mamie and Charley; and although Roos thinks of the third as being Jude’s tale -- ‘The girl meets the boy, she changes her mind, she attaches herself to the father, she blackmails the boy to keep silent, she finds out in the meantime she’s fallen in love with the father, she’s exposed, she has to give him up’ (Cavagna, 2005) – the narrative and mise-en-scène suggest this is Otis. Jude’s appearance creates the disruption in his life and it is only when she has left that the equilibrium returns; in the scene after the band rehearsal Otis’s nervy reactions appear in relative close-up as he leans against the counter and the reverse shot is over his shoulder, the spectator enjoying his long-shot point of view as Jude rifles through the kitchen looking for food, with the camera in a later seduction scene angled in such a way that Otis’s reactions are prioritised over hers. In the main, Jude appears as antagonist to Otis’s protagonist.

Conclusion.

See what I mean? Apart from odd lines noting how the architecture of the houses reflects the characters' class, and how Roos has a better idea of representing diversity in his film (although it's still with a secondary character), that's about all I wrote.  The key element which gave Quart and others the impression that they were watching something akin to Short Cuts must be the on-screen captions, but the rest of it simply isn't akin to Crash or indeed Love Actually in how the stories are told.  But it is still a remarkable film, because of those captions, because of the performances, notably from Maggie Gyllenhaal, whose character, a singer, contributes to one of my favourite film soundtracks.  It's on this blog's old Forgotten Films list and although I haven't seen it recently images are stuck in my head.  Images like:



Bibliography.

Cavagna, Carlo. 2005. Interview: Don Roos. In. AboutFilm. Available at: http://www.aboutfilm.com/features/happyendings/roos.htm. Accessed: 17th July 2006.

Johnson, Tonisha. 2005. Happy Endings: An Interview with Director Don Roos, Jesse Bradford and Jason Ritter. In. Black Film. Available at: http://www.blackfilm.com/20050715/features/happyendint2.shtml. Accessed: 17th July 2006.

Lee, Michael J. 2005. Don Roos. In. Radio Free Entertainment. Available at: http://movies.radiofree.com/interviews/happyend_don_roos.shtml. Accessed: 17th July 2006.

Morton, Victor. 2005. Tone Deaf: 'Happy Endings' Can't Get It Right. In. TheFactIs.org. Available at: http://www.thefactis.org/default.aspx?control=ArticleMaster&aid=1048. Accessed: 17th July 2006.

Quart, Alyssa. 2005. Networked. In. Film Comment. 41:4.


The Origins of Auteur Theory. by Feeling Listless



Film A short introduction from Filmmaker IQ. Worth looking at this piece on Bayhem and The Wes Anderson Collection for a discussion of this within a contemporary context.


Catching Kilonovae with the Webb by Astrobites

Hopefully by now you have heard about just how awesome the James Webb Space Telescope will be (cue the George Lucas styled teaser…). In short, the Webb telescope is the successor to the famous Hubble Space Telescope — but the Webb telescope is much more than that. Using a mirror seven times the size of Hubble’s and its multi-object spectrograph, Webb will be able to study hot topics like the atmospheres of distant planets or the hectic lives of the earliest galaxies. Its capabilities seem only limited by our collective imagination — which is why scientists share their ideas for Webb as white papers on the arXiv. White papers are those not designed for publication in the typical scientific journals; they are a means for scientists to informally share ideas with a wide audience. The Webb has no shortage of such papers. Today’s paper is actually a submitted article in which the authors propose the idea of marrying Webb with another major instrument: LIGO.

LIGO and Kilonovae

The Laser Interferometer Gravitational-Wave Observatory (LIGO) is made of two massive Michelson interferometers which can detect extremely small spatial fluctuations using a high-powered laser. These tiny fluctuations aren’t caused by seismic tremors or LSU football games; they are actually ripples in spacetime called gravitational waves. These waves originate from a number of astrophysical events, but today we’ll be focusing on just one type known as the kilonova.

Kilonovae are a result of the collision of two compact objects, such as two neutron stars. Before the neutron stars collide, they will spiral inwards, causing massive gravitational waves. LIGO will hear this inspiral as a “chirp,” while telescopes will see this collision as a bright explosion that is about 1000 times as bright as a classic nova (hence the name “kilonova”). Due to the complex nucleosynthesis of kilonovae, these objects are brightest at longer (infrared) wavelengths of light (You can read more about that process in this astrobite.)

Unfortunately, LIGO hasn’t detected any gravitational waves yet. But have no fear! Scientists are working on a network of updates to LIGO to create Advanced LIGO which should be 10 times more sensitive to gravitational waves. Advanced LIGO will be able to detect kilonovae as far as 1 billion light years away. There is one problem: Advanced LIGO will not be able to constrain the location of the kilonova very well — only to about 10 square degrees, or 1% of the sky. If we want to follow up LIGO detections with telescope observations, we need to search this area of the sky efficiently because kilonovae will dim dramatically a week or so after their explosion. If kilonovae were as bright as supernovae, this would be an easy task: just look for the brightest thing on that patch of the sky. Unfortunately, kilonovae are actually fairly dim; they only peak at around 24th magnitude in the optical (although they are 1000 times brighter in the infrared).

James Webb

Figure 1: Cartoon of the James Webb searching only a few galaxies in LIGO’s detection beam to find a kilonova.

 

 

Webb to the Rescue

The authors of today’s paper argue that the Webb telescope is a great tool for finding the source of LIGO’s detections, as shown schematically in Figure 1. They offer a few, interconnected reasons of why we should use the Webb. First, Webb will have a near infrared camera. As previously mentioned, the infrared is approximately where kilonovae should be brightest. Additionally, because Webb hosts a huge primary mirror, it is capable of detecting a kilonova within a two second exposure. Figure 2 shows this exposure time as a function of filter and days since the kilonova explosion. Finally, Webb is equipped with an array of microshutters, which can be used independently. This drastically cuts back on the time it takes the Webb to read out images from its camera.


Figure 2: Webb can detect kilonovae quickly. This plot shows the total integration time needed to detect a kilonova ~1 billion light years away as a function of the time since explosion. Within a week of the explosion, the IR camera of Webb can detect these kilonovae in less than 2 seconds.

This all sounds great, but there is one major setback. Webb’s slew time (or the time it takes the telescope to change positions between observations) is very slow. In fact, if you naïvely used the telescope to observe the entire 10 square degrees in question, it may take over 50 hours to find the explosion!

Bartos and collaborators suggest taking a systematic approach to searching for the kilonovae. Previous studies have shown that kilonovae often occur in places with many stars forming (i.e. galaxies with a high "star formation rate"). These star-forming galaxies often have strong H-alpha emission from ionized atomic hydrogen. Simply put, this means that galaxies with strong H-alpha emission are more likely to host kilonovae. Therefore, if we limit our Webb search to just the known galaxies with substantial H-alpha emission within 1 billion light years of Earth, we can reduce the time to survey the 10 square degree region to just 2-3 hours, depending on the number of galaxies.
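
To get a feel for those numbers, here is a rough back-of-the-envelope sketch in Python. Every input (field of view, per-pointing overhead, galaxy count) is an assumption chosen only so the totals land near the 50-hour and 2-3 hour figures quoted above; none of them is taken from the paper.

# Back-of-the-envelope survey-time estimate. All inputs are illustrative
# assumptions, not values from Bartos et al.
SEARCH_AREA_DEG2 = 10.0     # LIGO localisation region
FOV_ARCMIN2 = 9.7           # assumed near-infrared camera field of view (arcmin^2)
EXPOSURE_S = 2.0            # exposure needed to detect the kilonova
OVERHEAD_S = 45.0           # assumed slew/settle/readout overhead per pointing
ARCMIN2_PER_DEG2 = 3600.0

def survey_hours(n_pointings):
    """Total time to visit n_pointings fields, exposure plus overhead each."""
    return n_pointings * (EXPOSURE_S + OVERHEAD_S) / 3600.0

# Naive strategy: tile the whole error region.
naive_pointings = SEARCH_AREA_DEG2 * ARCMIN2_PER_DEG2 / FOV_ARCMIN2
print(f"Tiling the region: {naive_pointings:.0f} pointings, {survey_hours(naive_pointings):.0f} hours")

# Targeted strategy: only point at H-alpha bright, star-forming galaxies
# within ~1 billion light years (the count is an assumption).
target_galaxies = 200
print(f"Targeting galaxies: {target_galaxies} pointings, {survey_hours(target_galaxies):.1f} hours")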

With such a short time commitment, the Webb telescope will be able to follow up many LIGO targets over multiple nights, giving us unprecedented access to kilonova light curves and statistics. Hopefully, these observations can help us better understand the production of neutron-rich (r-process) elements and the equation of state of neutron stars and other compact objects.


What to expect when you're expecting a flyby: Planning your July around New Horizons' Pluto pictures by The Planetary Society

As New Horizons approaches Pluto, when will the images get good? In this explainer, I tell you what images will be coming down from Pluto, when. Mark your calendars!


Interview with an SLS Engineer: How Booster Test May Help Drive Golden Spike for Interplanetary Railroad by The Planetary Society

With NASA and Orbital ATK preparing for an important test tomorrow in Utah, an SLS engineer describes the inner workings of the vehicle's new solid rocket boosters.


March 10, 2015

"CSN without the Y" by Feeling Listless

Music On the occasion of the loss of filmmaker Albert Maysles, Diffuser looks at his work on concert films and more widely how there's a huge difference between a standard recording of a set and what's achieved when someone with a sense of narrative and wider context is involved. There are some interesting nuggets throughout, like this on Woodstock:

"Michael Wadleigh’s Woodstock documentary makes similar historical omissions for various reasons. Turning three days of peace, love and music into three hours of cinema mandates that some artists simply aren’t going to make the final cut. Additionally, technical issues seriously affected footage of some bands, and in a couple of cases artists didn’t want anything to do with the film.

"Neil Young stands as the most famous of the latter group. Crosby, Stills and Nash’s acoustic set remains a highlight of the movie, but their electric set with their fourth member is lost to the ages because Young refused to play with Wadleigh’s camera crew on stage, feeling that they were too invasive. As a result, the Woodstock mythology remains cemented as a wonderful night for CSN without the Y."
The BBC's Glasto coverage ever expands, but even then some bands find themselves omitted because the BBC doesn't happen to be covering a given stage. Or, as with Young, the band decides they don't want to be available on iPlayer for the following month.


Wishing for an Interactive Math, Programming & Physics Learning Environment by Albert Wenger

It is now six years since we at USV held our one-day mini conference on “Hacking Education.” A lot has happened since then. We have made investments in Codecademy, Duolingo, Edmodo and Skillshare. MOOCs really took off, with classes that have been taken by tens of thousands of students. Susan and I started homeschooling our children. And yet when I think about what is possible in theory, it still feels like we are early on.

One of the things that I envision is an interactive learning environment that seamlessly combines math, physics and programming (and possibly a lot more). Imagine that when you learn about coordinates, you have a coordinate system where you can either drag a point around and see its coordinates change or change the coordinates and watch the point move. And right there and then you can add some code (e.g. a loop) to animate the point by programmatically changing its coordinates. You can then see how that could quickly be used to show the trajectory of a ball that is thrown in the air.
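
As a crude, non-interactive sketch of that idea (nothing to do with any existing product), here is the thrown-ball example in Python with matplotlib: a point whose coordinates are updated programmatically in a loop, tracing the trajectory of a ball thrown in the air.

# A point animated by programmatically changing its coordinates:
# the trajectory of a ball thrown into the air. Purely illustrative.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

G = 9.81             # gravity (m/s^2)
VX, VY = 4.0, 12.0   # initial velocity components (m/s)

fig, ax = plt.subplots()
ax.set_xlim(0, 12)
ax.set_ylim(0, 8)
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
point, = ax.plot([], [], "o")
trail, = ax.plot([], [], "-", alpha=0.4)
xs, ys = [], []

def update(frame):
    t = frame * 0.05                     # 50 ms of simulated time per frame
    x = VX * t
    y = max(VY * t - 0.5 * G * t**2, 0)  # stop at the ground
    xs.append(x)
    ys.append(y)
    point.set_data([x], [y])
    trail.set_data(xs, ys)
    return point, trail

anim = FuncAnimation(fig, update, frames=60, interval=50, blit=True)
plt.show()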

Ideally, these learning objects would be fully embeddable inside other content (e.g., inside this blog post) and would also allow links to be attached inside of them, for instance a link out to the wikipedia entry on Cartesian coordinates. If these components themselves were open source with a permissive license then people could not only use them any which way they wanted but also contribute their own components and help extend and further develop existing ones.

The learning system itself would then take care of presenting various paths through knowledge and providing continuous testing of knowledge to wind up with a completely individualized learning experience (much like Duolingo provides for language learning). The system would know what you have mastered, where you need reinforcement and what potentially new things to offer you to learn. It would also begin to understand whether you learn better by reading a text or watching a brief video. And if you don’t grok a concept in one presentation it will surface other ones for you automatically.

One of the closest things to the level of interactivity I envision is the incredibly well done Desmos graphing calculator that’s available on the web and as apps. At present its components seem proprietary, though, and it is not a full-on learning system yet. But if you graph a line, for instance, all you have to do is enter y = mx + b and it automatically provides you with sliders to adjust the value of the slope m and intercept b. This shows nicely what is possible. Nothing I have described above is beyond what can be built today. If this is something you are working on or interested in working on I would love to hear from you (including connections to the Desmos team).


Our Programs Are Fun To Use by Jeff Atwood

These two imaginary guys influenced me heavily as a programmer.

Instead of guaranteeing fancy features or compatibility or error free operation, Beagle Bros software promised something else altogether: fun.

Playing with the Beagle Bros quirky Apple II floppies in middle school and high school, and the smorgasbord of oddball hobbyist ephemera collected on them, was a rite of passage for me.

Here were a bunch of goofballs writing terrible AppleSoft BASIC code like me, but doing it for a living – and clearly having fun in the process. Apparently, the best way to create fun programs for users is to make sure you had fun writing them in the first place.

But more than that, they taught me how much more fun it was to learn by playing with an interactive, dynamic program instead of passively reading about concepts in a book.

That experience is another reason I've always resisted calls to add "intro videos", external documentation, walkthroughs and so forth.

One of the programs on these Beagle Bros floppies, and I can't for the life of me remember which one, or in what context this happened, printed the following on the screen:

One day, all books will be interactive and animated.

I thought, wow. That's it. That's what these floppies were trying to be! Interactive, animated textbooks that taught you about programming and the Apple II! Incredible.

This idea has been burned into my brain for twenty years, ever since I originally read it on that monochrome Apple //c screen. Imagine a world where textbooks didn't just present a wall of text to you, the learner, but actually engaged you, played with you, and invited experimentation. Right there on the page.

(Also, if you can find and screenshot the specific Beagle Bros program that I'm thinking of here, I'd be very grateful: there's a free CODE Keyboard with your name on it.)

Between the maturity of JavaScript, HTML 5, and the latest web browsers, you can deliver exactly the kind of interactive, animated textbook experience the Beagle Bros dreamed about in 1985 to billions of people with nothing more than access to the Internet and a modern web browser.

Here are a few great examples I've collected. Screenshots don't tell the full story, so click through and experiment.

Feel free to leave links to more examples in the comments, and I'll update this post with the best ones.

(There are also native apps that do similar things; the well reviewed Earth Primer, for example. But when it comes to education, I'm not too keen on platform specific apps which seem replicable in common JavaScript and HTML.)

In the bad old days, we learned programming by reading books. But instead of reading this dry old text:

Now we can learn the same concepts interactively, by reading a bit, then experimenting with live code on the same page as the book, and watching the results as we type.

C'mon. Type something. See what happens.

I certainly want my three children to learn from other kids and their teachers, as humans have since time began. But I also want them to have access to a better class of books than I did. Books that are effectively programs. Interactive, animated books that let them play and experiment and create, not just passively read.

I want them to learn, as I did, that our programs are fun to use.



A Record-Breaking Runaway Star by Astrobites

Title: The Fastest Unbound Star in our Galaxy Ejected by a Thermonuclear Supernova
Authors: S. Geier et al.
First Author’s Institution: European Southern Observatory
Status: Published in Science

You’ve probably noticed the action movie trope of “cool guys walking away from explosions”. The authors of today’s paper argue that this cinematic cliche is playing out on a scale much grander than Hollywood. The hero of this cosmic blockbuster is a sub-dwarf O-type star 8.5 kpc away in our galaxy’s halo. The explosion was a supernova in the Galactic disk which may have detonated 14 million years ago.

The vast majority of stars in the Milky Way are orbiting on roughly circular trajectories around the center of the galaxy. Starting in 2005, astronomers have found a handful of stars that don’t obey this rule. These so-called hypervelocity stars are speeding away from the galaxy at such high velocities that the galactic gravitational potential will not be able to stop them. These stars will leave the Milky Way entirely and travel off into intergalactic space. Many of these stars are on trajectories originating near the center of the galaxy, as predicted by Hills in 1988, and were likely flung outward in an interaction between a binary star system and the central supermassive black hole. The authors of today’s paper focus on the fastest of the hypervelocity stars, called US 708.


Figure 1: Spectra of US 708 showing the redshifted absorption lines due to the star’s large radial velocity. The lines are themselves broadened due to the fast rotation of the star.

 

To determine the trajectory of US 708 in space, the authors first measured the radial velocity of the star along our line of sight using spectra. Measuring the Doppler shift in absorption lines coming from the star (Figure 1), the authors found that US 708 is moving away from us at 900 km/s. But this is just the motion along the line of sight. To solve for the full motion of the star through space, the authors measured its proper motion using images obtained over 50 years from the Digitized Sky Survey, Sloan Digital Sky Survey, and Pan-STARRS. The measured proper motion is shown in Figure 2. Together, these observations indicate that the star is moving at about 1200 km/s away from the disk of the Milky Way. Now we know where the star is going, but where did it come from, and how was it ejected from the galaxy?
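
The arithmetic for combining those two measurements is straightforward. The sketch below uses purely illustrative input values (the proper motion in particular is an assumption, and the paper's actual analysis also corrects for the Sun's own motion around the Galaxy), but it shows how a radial velocity and a proper motion at a known distance combine into a total space velocity.

# Combine a radial velocity with a proper motion to get a total space velocity.
# Input values are illustrative, not the paper's measurements.
import math

radial_velocity_kms = 900.0   # line-of-sight velocity from the Doppler shift
proper_motion_mas_yr = 20.0   # assumed total proper motion (milliarcseconds/year)
distance_kpc = 8.5            # distance to US 708

# Tangential velocity: v_t [km/s] = 4.74 * mu [mas/yr] * d [kpc]
tangential_velocity_kms = 4.74 * proper_motion_mas_yr * distance_kpc

# The radial and tangential components are perpendicular, so add in quadrature.
total_velocity_kms = math.hypot(radial_velocity_kms, tangential_velocity_kms)

print(f"tangential: {tangential_velocity_kms:.0f} km/s")
print(f"total:      {total_velocity_kms:.0f} km/s")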


Figure 2: The right ascension (top) and declination (bottom) of US 708 over a 54 year period, taken from various imaging surveys. The black lines are the best fit, representing the observed proper motion. The red lines show the proper motion necessary for the star to have originated in the galactic center.

 

Where in the galaxy did this runaway star come from? Figure 3 shows the likely region of origin in the galactic disk. Using a model of the galactic gravitational potential, the authors ran dynamical simulations of the star’s past trajectory to determine where the star intersects the disk of the Milky Way. These simulations show that US 708 did not originate near the central supermassive black hole, but was most likely ejected from the disk about 8 kpc from the center. This rules out the traditional ejection mechanism – US 708 was not hurled out into space by the galaxy’s black hole. So how did this star get its hyper velocity?


Figure 3: Results of Monte Carlo simulations of the past trajectory of US 708. The view is top-down on the galaxy. The black dot marks the galactic center, the star marks the location of the Sun and the triangle marks the projected position of US 708. Each colored pixel represents the number of simulated paths that passed through that point in the galactic disk. None of the simulations took US 708 anywhere near the galactic center, ruling out an interaction with the central black hole as the ejection mechanism.

The authors propose that US 708 was part of a binary system, orbiting closely around a white dwarf. The two stars were close enough that the white dwarf pulled material from US 708 and eventually detonated in a Type Ia supernova. The supernova released US 708 from its tight orbit, and US 708 flew off along a straight line with the same speed at which it was orbiting. The closer the binary system, the faster the ejected star. The observed velocity of US 708 indicates that this progenitor binary system had an orbital period of only 10 minutes! Another hint that US 708 originated in a close binary is its rapid rotation. Take another look at the spectra in Figure 1. The absorption lines are broadened due to a rotation velocity of about 100 km/s, which makes US 708 by far the fastest rotating runaway star known. In a close binary, the rotation of each star is synchronized with its orbit, a phenomenon known as tidal locking. As we have seen, the orbital velocity of this system was very large, so the ejected companion will be rotating quickly.
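
Kepler's third law gives a feel for why such a tight orbit implies such a high ejection speed. The masses in the sketch below are assumptions picked purely for illustration, so the resulting speed is indicative rather than the paper's value.

# Orbital speed of the donor star in a 10-minute binary, via Kepler's third law.
# The two masses are assumptions for illustration only.
import math

G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30    # solar mass (kg)

period_s = 10 * 60           # 10-minute orbital period
m_donor = 0.3 * M_SUN        # assumed mass of the sub-dwarf (US 708)
m_wd = 1.0 * M_SUN           # assumed mass of the white dwarf

# Kepler's third law: a^3 = G * (m1 + m2) * P^2 / (4 * pi^2)
a = (G * (m_donor + m_wd) * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

# The donor orbits the centre of mass on a circle of radius a * m_wd / (m1 + m2).
a_donor = a * m_wd / (m_donor + m_wd)
v_donor_kms = 2 * math.pi * a_donor / period_s / 1e3

print(f"separation: {a / 1e3:.0f} km, donor orbital speed: {v_donor_kms:.0f} km/s")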

If a supernova was responsible for ejecting US 708 from the galaxy, this hints at an unusual flavor of Type Ia supernovae. Traditionally, Type Ia supernovae are thought to form when a white dwarf accretes mass from a red giant, exploding when it reaches the Chandrasekhar limit. However, US 708 is a compact sub-dwarf O-type star, similar to the helium core of a red giant that has been stripped of its puffy outer hydrogen layers. Thus, US 708 would have donated helium onto its white dwarf companion, which can detonate the white dwarf well below the Chandrasekhar limit. This type of detonation may be one of the explanations for the recently observed Type Iax supernovae.

If US 708 was ejected from the galaxy by a Type Ia supernova, a closer study of its composition could help to unlock the origins of these cosmic explosions which cosmologists rely on as standard candles. The nearby supernova may have polluted the surface of US 708 with heavier elements, warranting a closer spectroscopic study. Soon, missions like Gaia are expected to greatly expand the number of hypervelocity stars known, perhaps uncovering more like US 708 that have been ejected from the galactic disk. Until then, look for US 708 at a theater near you this summer; this star was walking away from explosions 14 million years before it was cool.


LightSail Arrives in Florida; More Launch Details Revealed by The Planetary Society

A cadre of CubeSats including The Planetary Society’s LightSail spacecraft completed a cross-country journey to Florida, where they await installation aboard an Atlas V rocket.


A Sky Full of Stars by The Planetary Society

In pictures of the planets, the stars aren't usually visible. But when they do appear, they're spectacular.


March 09, 2015

The Doors in New York City. by Feeling Listless

Photography From npr, the kind of story which doesn't work on the radio:

"Between November 1975 and September 1976, a man named Roy Colmer decided to photograph New York City's doors. Not all of New York City's doors. No doors in particular. And in no real particular order. But his aptly named Doors, NYC project amounted to more than 3,000 photos, which now live with the New York Public Library.

"If you're like me and want to obsessively look at every single one, the best way to do that is here. But then, I did that so you don't have to. Firstly, note the door on the bottom left. For every dozen-ish nondescript doors, you'll find a little treat — like a poster of a cat ..."
My favourite? The one which says boldly, "CLOGS OF COURSE" because of course, clogs.


March 08, 2015

What made me a feminist? by Feeling Listless

Life Why do I think the way I do?

This isn't the first time I've wondered this and since it's International Women's Day, I thought I'd attempt to trace through my memory to try and work out why I think the way I do about, well everything.

Have I always thought this way?

Quite honestly I don't know.

The only reason I'm asking is because there seem to be so many people who for some reason don't, and I feel sorry for them and, unlike them, don't think they're entitled to have that opinion, not in 2015.

I do have some memories.

My first two best friends were girls, I think because their Mums were my Mum's best friends. I have photos of a birthday party at about the age of five, the three of us sitting around the birthday cake.

At primary school, I remember the story books and later on text books featuring the usual gender roles. Men went out to work. Mothers were housewives.

All my real friends were girls. I have vivid memories of sunny break times sitting in the grass making daisy chains while all the rest of the boys played football. I had friends who were boys but it wasn't the same. I didn't like football.

That hasn't really changed. I find women much easier to talk to than men.  I still don't like football.

Except when I began secondary school, it was an all boys school and just as puberty hit, I lost the ability to talk to girls. I'd get nervous. Odd. All my friends were male for years.

Then girls arrived in the sixth form and they were utterly brilliant, and I thought so even though I couldn't speak to most of them.

There was also the moment at university at a hall formal, which was at a hotel, stuck on a toilet overhearing two blokes at the urinals outside referring to potential conquests as "the blonde one" and "the ginger" and wincing and wishing to god I'd never be anything like them.

I was also bullied a lot at school which has led to a dim view of any kind of oppression.  Gender, race, anything.

Is any of this really relevant? I don't know. Probably not.

But what I'm trying to say is that I can't remember the moment when I became a feminist or at least thought women should have the same rights as men. There's no one thing which made me "get it".

I've just always thought so and can't understand why anyone wouldn't.

Is this unusual? I don't know that either.

People just have the experiences they have I suppose. I was reading Wonder Woman comics at an early age. Watched a lot of Star Trek: The Next Generation as a teenager and I expect a lot of my liberalism can be traced back to that. Reading Shakespeare's Measure for Measure and Chaucer and being shocked at the treatment of women by the societies of the past. Listening to a lot of female singer-songwriters dealing with their experiences through lyrics. Tending to identify with female protagonists in films more than men.  Reading The Guardian's Women pages.

See what I mean?  It's all a bit woolly.

If anything it's become even more focused this past few years, thanks to social media, reading feminist writing, watching my way through this and the general sense of injustice but knowing full well I'm not the right gender to really understand what it's like to live within a patriarchal society, or as I've taken to calling it "the fucking patriarchy".

I have no answer.  So I'll just be pleased that I can see it and hope that someday everyone will.


Evolution, diffusion, hype cycle and early failures. by Simon Wardley

With mapping, one of the axes is evolution (the other being value chain - I've written an introduction to Wardley Mapping which covers the basics).

One axis describes what something is (the value chain, from the user needs to the components required to meet it through a chain of needs). The other axis (evolution) describes change.

I've posted before on how the evolution axis was determined but when exploring this subject, I didn't start with the evolution axis as I had to discover it. In the very early days (2000 - 2004), I tried all sorts of different forms of measuring change. All failed.

The evolution axis (see figure 1) has a number of specific properties. It is :-

Figure 1 - Evolution



1) Universal. It appears to apply to all things that undergo competition, whether activities, practices, data or knowledge. 

2) Causation. It is caused by the interaction of demand and supply competition. Without competition it doesn't occur. It also doesn't require a crystal ball, as time is abolished from the axis. Evolution is measured over ubiquity vs certainty.

3) Measurable. The evolution of any component can be measured using a combination of publication types and how widespread the component is in a field. Unfortunately, only the past can accurately be measured which means the component has to become defined on the certainty axis (i.e. a commodity) before its past can be fully described. Until that point, there is an increasing element of uncertainty with position as the component becomes more uncertain (i.e. unknown).

We have no crystal ball and cannot predict precisely when things will evolve (I had to abolish time to create the evolution curve) however we can test evolution through the measurement of the past and secondary effects (i.e. other consequences caused by evolution). We can also use weak signals to anticipate evolutionary change.

4) Usefulness. There is a long list of reasons why mapping across the evolution axis is useful.

5) Consistency. There is only one evolution path. The evolution of one activity can be directly overlaid on the evolution of another. There is no need for a vague 'time' axis where 'time' is not a measurement but a direction of travel. 

6) Direction. A thing evolves in a single direction, i.e. the past is behind it, the future is ahead. It doesn't start evolving, become a product and then suddenly jump back into genesis as an unknown thing. When examining complex products, what we find is that objects like a 'smart phone' are actually a grouping of many underlying components.

7) Recursive. Evolution can be equally applied at a high level (e.g. a specific business activity) or at a low level (e.g. a component of an IT system).


The early days

Before discovering the process of evolution, I did try to map with all sorts of things. These were inevitably failures. I thought I'd go through a couple and explain why.

Diffusion Curves.

Diffusion Curves (as per Everett Rogers) are extremely useful in many contexts. They are measurable (adoption vs time) but unfortunately the time span is inconsistent between different diffusion curves. If you overlay multiple diffusion curves on exactly the same time axis then you get multiple different curves. They also lack direction, i.e. the diffusion of a particular phone A1 (from early adopters to laggards) is followed by the diffusion of a better phone A2 (from early adopters to laggards), which is followed by the diffusion of an even better phone A3 (from early adopters to laggards) ... and so on. Hence the evolution of an activity is seen as a constant set of jumps back and forth rather than a direction of travel, and the future of an act can be both ahead of and behind it on the map. I've compared a map based on evolution (see figure 2) with a map based upon diffusion (see figure 3).

Figure 2 - Map based upon evolution, direction.



Figure 3 - Map based upon diffusion, direction.


Hence from a mapping perspective, this makes diffusion curves useless. However, don't confuse that with diffusion curves being useless, in other contexts they are extremely useful.

Hype Cycles

Hype cycles are extremely useful in many contexts. They are not however based upon any physical measurement but instead are aggregated opinion. This means they are not testable, so we just have to accept them at face value (even when they do change the axis from visibility to expectations). The other axis is time (though it used to be maturity).  Time, in this case, is more of a vague direction of travel rather than any measurement of time. It also suffers from a lack of direction in the same way diffusion curves do. 

In figure 4, I've added four different hype cycles from 2002, 2005, 2010 and 2013. First thing is the axes have changed from visibility vs maturity to expectation vs time whilst the curve has remained the same. I even have a couple with expectation vs maturity.  This doesn't matter. The hype cycle isn't based upon measurement of some physical property, it's based upon opinion, so this is fine.

However, we have software as a service / ASP being in the slope of enlightenment in 2005 and cloud computing being in the peak of inflated expectations in 2010. Now, this all depends upon opinion and whatever definition they wish to choose. But, as with diffusion there's no direction i.e. one thing doesn't evolve into another but rather one thing in the slope of enlightenment becomes another related thing in the peak of inflated expectations. So when it comes to mapping rather than compute (as a product) evolving to include compute (as a rental service) then evolving to compute (as a commodity) including compute (as a utility), with the hype cycle you have the same back and forth that you have with diffusion curves.

Figure 4 - Four Hype Cycles



As I found out long ago, the hype cycle in the context of mapping is useless (as with diffusion curves). Don't confuse that with the hype cycle being useless itself. In many contexts as a source of aggregated opinion, then it is very useful. 

I looked at many techniques to measure change and found all of them wanting. I spent years finding out that lots of things weren't useful for describing evolution. This is why I spent so long in the British Library cataloguing many thousands of publications. There was no effective means of describing the process of evolution until I'd done this work and found a process that seemed to work.

But then that's the point, diffusion curves measure diffusion, hype cycles provide an aggregated opinion on hype, whilst the evolution curve measures evolution (it doesn't measure either diffusion or hype). All have their purpose.

When it comes to mapping and improving situational awareness then you need to focus on what an organisation is (the value chain) and how the components of the value chain are changing (evolution). Does this mean the evolution curve is right? No, of course not. It's just the best model at this moment in time for examining evolution and as such it's an essential part of mapping.


--- 13th March 2015

Based upon Henry's question (below), I've added a graph to show the link between diffusion curves (Adoption vs Time) and Ubiquity.

The first graph shows adoption curves for different instances of an evolving activity A [through various instances A1 to A6], but rather than simply use adoption as the Y axis, I've used the applicable market. The reason for this is that adoption is relative to an applicable market, and the market sizes for each evolving act are different.

Figure 5 - Diffusion Curves (adoption vs time). 


I've then added where each evolved instance of the act A is in the evolution curve.

Figure 6 - Evolution



[The two graphs above are purely illustrative. I've taken a few liberties to make the transition less complex.] 

Now, a couple of things are worth noting.

Ubiquity is a measure of how ubiquitous something is. In order to determine it, you need to know the end point i.e. when the activity has become a well established commodity. This point of ubiquity can only be accurately estimated when the act has become certain. At this point the past history can be graphed.

I have to emphasise that the accuracy with which you can determine where something is, or where something was, on the graph increases as the act becomes more certain, i.e. a commodity. If the act is not a commodity then where it currently is on the ubiquity axis becomes increasingly uncertain the newer the act is.

For this reason, you cannot, when something novel appears (e.g. the genesis of electricity with the Parthian Battery in 400 AD), determine how ubiquitous electricity will become some 1600 years later. No-one can. This requires a crystal ball.  However, by the 1960s you've a pretty good idea of how ubiquitous electricity is and can graph the past.

The certainty axis is determined from publication types (see figure 7). In particular, it is a relative measure of type II and type III publication types.

Figure 7 Publication types.



To graph the past, you need to first use the publication types (used to create the certainty axis) to determine if something has become an established commodity. You can then determine the point of ubiquity (what I call the reference point) and use this to determine how ubiquitous something was. You can then plot ubiquity against certainty in the past. An example of this covering a range of entirely different activities is provided in figure 8.

Figure 8 - Ubiquity vs Certainty.


You can then overlay the different areas where different states (custom built, product etc) dominate which gives figure 1 - the evolution curve at the beginning of the post.
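
As a very loose illustration of that bookkeeping (the real classification of publication types and the exact certainty measure are not reproduced here; the data and the certainty proxy below are invented for illustration), a sketch in Python might look like this:

# Loose sketch of plotting ubiquity vs certainty for one activity.
# The certainty proxy below stands in for the real publication-type analysis
# and the yearly counts are made up.
import matplotlib.pyplot as plt

# (year, "how to build/operate" articles, "use/maintain/feature" articles, installed base)
samples = [
    (1995, 40,   2,    500),
    (2000, 90,  30,   8000),
    (2005, 60, 120,  60000),
    (2010, 20, 300, 240000),
    (2015,  5, 400, 300000),
]

# Point of ubiquity (the reference point): the installed base once the activity
# has clearly become a commodity. Only knowable after the fact.
reference_point = samples[-1][3]

certainty = [t3 / (t2 + t3) for _, t2, t3, _ in samples]      # assumed proxy
ubiquity = [base / reference_point for _, _, _, base in samples]

plt.plot(certainty, ubiquity, marker="o")
plt.xlabel("certainty (relative share of later publication types)")
plt.ylabel("ubiquity (fraction of the reference point)")
plt.title("Illustrative ubiquity vs certainty for one activity")
plt.show()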

To graph the future, you can't. The best you can do is to make a best guess at to where something is on the curve. Which is why with mapping I use broad categories - genesis, custom built, product etc. I get groups of people (with experience of the field) to estimate how far along it is. The accuracy of this will increase as the act becomes more certain.

To see evolution then you need to abolish time from the graph (i.e. there is NO way to measure evolution over time in any form of repeatable pattern). However, whilst things will evolve over some (unspecified length of) time, you cannot accurately demonstrate where something is on the evolution curve until it has become a commodity. Then you can accurately demonstrate where it was.

I cannot emphasise this more. The future is an UNCERTAINTY barrier which we cannot peek through. The more UNCERTAIN something is (i.e. genesis, custom built) the less able we are to see the end state. ONLY when something is close to being CERTAIN can we see the point of ubiquity (the reference point).

I do see people draw diffusion curves and start claiming things about them like here is a commodity, this end is an innovation. Alas, there is no repeatable graph of evolution over time.

We do NOT have a crystal ball. No-one does.

Diffusion <> Evolution. Evolution consists of many diffusion curves and there are many chasms to cross. The two are not the same, mix this up and you'll get lost.

Despite lacking any crystal ball, we can however say that something will evolve from genesis, custom built to product (+rental services) to commodity (+utility services). It will become more common and more certain. This change is driven by supply and demand competition. This is what figure 1 shows.


--- Just for Fun (13th March 2015)

Oh, and just because I like a good debate, I'll leave you with this graph of progress. Whilst being efficient with energy is good, anyone who thinks we can somehow reduce energy consumption permanently whilst maintaining progress needs to think again. Whatever we do, if progress continues our energy consumption will eventually exceed today's. If you want to solve energy related issues then you need to look for less damaging forms of production. Competition (whether life or business) is a constant exercise of reducing entropy within the local system through the consumption of energy.



Ubiquity vs Adoption (16th March 2015)

I thought this was fairly obvious but it seems there is still some confusion. I need to clarify.

100% Adoption of the iPhone 1 <> 100% Adoption of the iPhone 6. The markets are different.

Q. But can't we look at the smartphone market? 

Well, what do you measure against?

Q. When it's ubiquitous?

Define ubiquity

Q. When everyone has one?

So, gold bars aren't ubiquitous because not everyone has one? Gilts aren't ubiquitous because not everyone has one? CRM systems aren't ubiquitous because not everyone has one? Coffee beans aren't ubiquitous because not everyone has one? Computer servers are not ubiquitous because not everyone has one?

Except ... Gold bars, Gilts, CRM, Coffee beans and computer servers are all ubiquitous in their markets. It's just their markets are very different.

Q. But how do you convert adoption to ubiquity?

You need to find the point of ubiquity in a market. To do this you need to look at publication types and work out when something is a commodity. Then you can find the point of ubiquity and plot back how ubiquitous something was vs how certain (i.e. well defined, well understood) something was.

You cannot simply take adoption of something in its market and call that ubiquity. You cannot jump from an Everett Roger's diffusion curve to an Evolution curve. Just because they both happen to have an S-Curve shape doesn't make them the same thing.

Q. But that's what everyone does!

Well, they are all wrong.


Greg's Alarm Clock by Simon Wardley

Back at EuroOSCON in 2006, I gave a talk on commoditisation covering the Web of Things (what a group of us used to call the Internet of Things before IoT took over) and 3D printing. In that talk, I discussed Greg's Alarm Clock created by Greg McCarroll (who has since sadly passed away).

Greg's clock was in my opinion one of the first really useful and smart devices.

The alarm clock, which consisted of a mix of Lego and other components, was just a prototype concept. It did one very simple thing. Before waking you up, it checked a web service for the local train station to determine if your train was delayed or cancelled. If it was, the clock reset itself to get you up in time for the next train. If it calculated you were going to be late for work, it sent an email to work informing them on your behalf.

All this happened whilst you were sleeping. 
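
The logic is simple enough to sketch. The helpers below are hypothetical stand-ins rather than real services or APIs; it's just the shape of the idea in Python.

# Sketch of the alarm clock's logic. The train-status and email helpers are
# hypothetical stand-ins, not real services or APIs.
from datetime import datetime, timedelta

USUAL_TRAIN = "07:42"
COMMUTE = timedelta(minutes=35)      # time to get up and reach the station
JOURNEY = timedelta(minutes=55)      # assumed train ride plus walk to the office
WORK_START = datetime.strptime("09:00", "%H:%M")

def check_train(departure):
    """Stand-in for the station's web service: return the next train that is
    actually running. Here we pretend the usual one has been cancelled."""
    return "08:12" if departure == "07:42" else departure

def send_email(subject, body):
    """Stand-in for emailing work on the sleeper's behalf."""
    print(f"EMAIL: {subject} - {body}")

def plan_wake_up():
    next_train = check_train(USUAL_TRAIN)
    departure = datetime.strptime(next_train, "%H:%M")
    wake_time = departure - COMMUTE              # reset the alarm for the next train

    if departure + JOURNEY > WORK_START:         # going to arrive after work starts
        send_email("Running late", f"Earliest train is now {next_train}.")

    return wake_time.time()

print("Alarm set for", plan_wake_up())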

This is how 'smart' things should work. I don't find much use in switching on my washing machine with a phone app. I prefer devices that understand the context, the environment and can operate on my behalf, a subject I covered in a previous talk called 'Any Given Tuesday'. 

I don't want my future devices littered with a sprawl of apps to control lighting, heating and the temperature of my fridge. I want my devices to ask the questions and take care of the details.


March 07, 2015

So, well, Eurovision UK 2015. Yeah. by Feeling Listless



Music Tonight, while I was watching a double bill of Ozon's Jeune & Jolie and the Melanie Lynskey-starring divorcee indie Hello I Must Be Going, both of which were telling similar stories in different ways, the BBC announced this year's UK entry for the Eurovision. Having tried viewers' votes for the actual songs and artists, and then just the songs having chosen the artists, and then neither, we're now at the point where they have some kind of open cattle call and some lyrics are given to some singers at the end. In other words: here you are, cheer this on then. Here it is above. Some notes:

(1) It's really weak. I'm quite the fan of 20s & 30s revivalism, especially when The Puppini Sisters and Christina Aguilera have tried it, but this doesn't take us much further than Doop's "Doop" and that was 1994. Plus it's for the Eurovision in an anniversary year and we're witnessing a total loss of confidence. After fielding a half decent record last year which came 17th, they've clearly decided to go with the "fuck it let's just do novelty" strategy on the assumption we'll probably come mid-table again anyway but at least there'll be a solid reason for it other than Europe hates us.

(2)  The bullshit gender politics in the lyrics.  Especially from the second verse onwards.  See below.  It's all about the man telling the woman how to deport herself while he's not in her company and her submissively agreeing.  Which might have been acceptable writing a hundred years ago, but not now.  Not now at all.

Him:
"Some younger guys, with roving eyes, may tantalise you with their lies, you must be wise and realise, leave well alone 'til you get home, dear."
Her:
"Won't see other fellas, won't make you jealous, no need to fear when you're not here, I'm still in love with you."

Him:
"Don't walk on the red light, don't stay out at midnight, don't get in a fist fight, that pretty face can't be replaced."
Her:
"Won't be out at night hon, it wouldn't be right hon, no need to fear when you're not here, I'm still in love with you."

Her:
"Don't make a fuss you have to trust, this is how it always must be, when I stop to think of us, I can assure you, I adore you."
Him:
"God you're so gorgeous, no need to be cautious, take good care when I'm not there, I'm still in love with you."

Him:
"You have a fun time, soak up that sunshine, but don't drink too much wine, just one or two will have to do."
Her:
"I know what you're thinking, so won't be drinking, no need to fear, when I'm not here, I'm still in love with you."
I'm willing to admit the odd word may have been poorly transcribed - I'm not sure in particular about that final line - but on the whole I don't understand any of this anyway.  Unless she's simply telling him that and just doing what she sweet wants anyway?  I hope so.  Not that you can tell from the performance.  If I'm off base on this do tell me.  It's not Doctor Who's The Caretaker, but it feels pretty close.  Minus points too for the official BBC press release not including the lyrics, which are supposed to be half the point.

(3)  The music is a confused mish-mash.  Perhaps having sensed that previous performers from other countries have run aground when just keeping within genre, they've decided to lather this thing in ill-fitting electronica and country riffs which simply confuse the whole business.

(4)  Here are the biographies of the writers:

David Mindel has had a successful career in song-writing, working with the likes of Olivia Newton-John, Barry Manilow, The Shadows, John Travolta, Mud and Musical Youth to name but a few.

Following a successful song writing career, David embarked on a new chapter - writing and recording some four thousand TV and radio commercials, including penning the themes for BBC One’s National Lottery and Euromillions TV shows.

Adrian Bax White is a classically trained multi-instrumentalist whose eclectic music career has spanned pop to fusion jazz with everything in between. Adrian has worked with a multitude of singers from multi Grammy award-winners such as John McLaughlin and Narada Michael Walden to underground indies such as Blue Orchids and acid jazz godfather Lonnie Liston Smith.

Mindel's IMDb expands on some of the other tracks he's worked on:  Bob's Weekend, Coogan's Run, Challenge Anneka, The District Nurse, Real Life, The Hot Shoe Show, I Get Your Act Together, Rory Bremner, Who Else?, Harty, Food and Drink and Jim'll Fix It.  Some of which I used to love.

(5)  In a world where Taylor Swift's 1989 exists, where even I'll admit, glancing at the UK top 40 demonstrates there's some really interesting, ballsy pop music in production, the BBC and whoever have seen fit to choose this, which says nothing about the British music industry or the state of the art.  Once again we're treating Eurovision as a "fun party" and "nothing too serious" and "a joke" when it could be a celebration of who we are and what our music industry is.  Which I know we already do across the world with the "real" music, but wouldn't it have been amazing if Scott Mills had introduced something tonight and our reaction had been the collective awe of hearing "Shake It Off" or "Let It Go" or "Happy" or "Fireworks" or whatever Beyonce's doing this week, something with a push and a donk on it, instead of hearing this and trying to rationalise it, which judging by the social media even people who're generally supportive are doing?

(6)  I just don't like it, ok?  Sigh.


Hi Jim ... by Simon Wardley

A response to Jim's Cloud Post.

---

Hi Jim,


Not quite how I remember the conversation. Factors involved in adoption - efficiency in provision, increase in demand (price elasticity effects, long tail of unmet demand, evolution of higher order systems), rate of innovation of new services (ecosystem effects), ability to take advantage of new sources of wealth (development speed, reduced cost of failure), inertia (16 different forms) & competitor actions. When talking about the shift of infrastructure from product to utility then all these factors come into play. 

Efficiency of Amazon's provision. Do remember that since IT is price elastic and Amazon has a likely constraint (e.g. acquiring land and building data centres), Amazon will certainly have to manage its pricing carefully i.e. if it dropped pricing too quickly then demand could exceed supply. So, you need to consider future pricing as well. In all likelihood AWS EC2 is operating at 60%+ margin but this will reduce over time.

Increase in demand. One of the most amusing cloud 'tales' is the one that it'll save money. Infrastructure is a million times cheaper today than 30 years ago but has my budget dropped a million fold in that time? No. We don't tend to save money, we tend towards doing more stuff. This is Jevons Paradox. What we need to be mindful of is that our competitors will do more stuff. Which is why you need to be careful about future pricing. If your IT budget is 2% of total budget and your competitor has a 10x advantage then you might shrug it off as a small part of the budget. But your competitor is likely to end up doing more stuff and suddenly (just to keep up) you'll find you're spending a lot more than 2%.

Rate of innovation of new services. There are numerous ecosystem games to play in a utility world (such as ILC) which enable a provider to simultaneously be innovative, efficient and customer focused. This seems to be happening with Amazon as all three of those metrics appear to be accelerating. This provides direct benefit for the users of that environment in terms of new service releases.

Ability to take advantage of new sources of wealth. Key here is speed and reducing the cost of failure both of which a utility provider offering volume operations of good enough but standard commodity components provides.

Inertia. We all have inertia to change (loss of previous investment, changes in governance / practice, loss of social capital, loss of political capital etc - 16 different forms in total). There will be a counter to any change.

HOWEVER ...

The competitive pressures to adapt are often not linear but have exponential effects. If an adaptation gives competitors greater efficiency, increased access to new services and an increased ability to take advantage of new sources of wealth, then as more of your competitors adapt, the pressure on you mounts. This creates the Red Queen Effect (Prof. Van Valen).

As a result these forms of change are not linear but exponential. It can take 10 years for such a technology change to reach 3% of the market and a further 5 years to hit 50%. Because of inertia to change and due to its non linear nature many companies (especially competing vendors) get caught out. However, in all such markets there are usually small niches that remain. 

There is also no reason why commoditisation has to lead to centralisation. Many of the forces can be countered. Unfortunately, due to the incredibly sucky play of past executives within competitors, in this case centralisation (to AWS, MSFT and, a distant third, Google) seems very likely. Some of those past executives were warned in 2008 about how to fragment the market by creating a price war with AWS clones, forcing demand beyond Amazon's ability to supply due to the data centre constraint. It's shocking that they were so blinded that they've got large companies into this state.

So, 

1) Will infrastructure centralise to those players of AWS, MSFT and GooG? Yes, plus clones of those environments. Competitors have shown pretty poor strategic play in the past and this is now the most likely outcome.

2) Will everything go to public infrastructure clouds? No. There will be niches. There is also inertia to the change but the pressure will mount (Red Queen) as competitors adopt cloud. The change usually catches people out due to its exponential nature.

3) Is it just price? No. Multiple factors involved. Price is one of those factors.

4) Why is Walmart building a modest sized private cloud? Probably because it's concerned over Amazon's encroachment into its own retail industry. In all likelihood they will end up adopting Azure or GooG over time.


Let the Towers burn ... by Simon Wardley

In this post I'm going to cover some issues with the SIAM Tower model. I'll use mapping to do this.

Wardley Mapping is a technique that I developed out of utter frustration with IT and the business. I've used it for over a decade with continued success. It's based upon two things. Firstly, a value chain (derived from user needs) which describes an organisation. Secondly, evolution, which describes the process of change. The two are essential because there's no point in trying to deal with an organisation 'as is'; you also need to consider 'where things are heading'.

The evolution axis wasn't developed by sitting in a room thinking up what might be a good idea in a powerpoint presentation. It took over six months of intensive collection of many thousands of data points just to test it. In total the mapping technique took over ten years to create (1995-2005) and several more years to test the validity of the axis.

I'm going to start with an example of a map, an early draft from HS2 (high speed rail) and then compare the approach. This is what a Wardley map looks like (see figure 1)

Figure 1 - a Wardley Map.


Couple of things to note with the map. At the top are the systems that provide the user needs. Underneath is a chain of needs i.e. components that the higher order systems require. Each component (or node) is positioned according to how evolved it is. 

What might not be obvious is that each component could be an activity, practice, data or piece of knowledge. All components evolve through the same mechanism and as they evolve their characteristics (i.e. properties) change. The mechanism of evolution is shown in figure 2 (it is driven by demand and supply competition).

Figure 2 - Evolution



I've summarised some of the changes of properties in table 1. Ignore the class / phase part - that's part of a classification system which isn't relevant here.

Table 1 - Changing properties of an evolving component.


What this means is that when you look at your Wardley map, the components of the map have different properties depending upon how evolved they are. This is summarised in figure 4.

Figure 4 - Change of Properties.


Hence the Wardley map (figure 1) has the terms 'uncharted' and 'industrialised' at the top. When applying methods to a complex system then you have to use many methods because each method is suited to a different part of the map (see figure 5) whether it's project management or purchasing.

Figure 5 - Appropriate Methods


So when you start to manage your map, you treat each component individually and apply the right techniques.  In some cases you'll get variation in the high level maps because the node might represent an entire map underneath and has been simplified (see Sub Maps in Other Tools post). An example of applying the right methods is given in figure 6.

Figure 6 - Apply the right methods


Now the danger of outsourcing a large chunk of activities is that if you do this then it's probable that a single (and inappropriate) method will be applied i.e. a very structured method like six sigma will be used across all of the map. This will inevitably lead to change control cost overruns as the uncharted components will change (they are unknown, uncertain and require experimentation). There is nothing you can do to stop this cost overrun. You can't more fully specify the uncharted activities precisely because they're uncertain and unknown. Arguments inevitably kick off and the supplier will point to all the industrialised acts which didn't change and were delivered efficiently. The message becomes "the fault is yours" but this is NOT true. The fault is the application of a single and inappropriate method. I've summarised this in figure 7.

Figure 7 - How you don't want contracts to be structured.


Now, many companies have little or no situational awareness. They have no maps. So they outsource large chunks, get hit by excessive change control costs and invariably get told it was their fault for not knowing what they wanted and specifying enough. The vendor wins, the customer loses. This is pure hokum, it shouldn't happen and many vendors exploit this to their advantage. The reason for the cost overrun caused by excessive change control costs was simply the single method used. 

In general what you want to do is break down a system into small focused contracts which use the right methods i.e. you don't want some broad and widespread contract across the map. A comparison is given in figure 8.

Figure 8 - Contracts - Good vs Bad.


So, what's wrong with SIAM and the tower model? 

Well, let's look at it (see figure 9). The grand idea is to break things up into smaller chunks called Towers (which is a good move) but these chunks completely ignore evolution. They are arbitrary divisions such as application development, hosting, operations, end user services etc. These are all then reintegrated and combined with a business-retained IT organisation.

Fortunately, some of those towers are quite commodity by nature e.g. IT infrastructure. So we do strike lucky twice with smaller contracts and a few towers being commodity. But other than that, it's up there with Bimodal IT - lipstick on a pig. It is ignorant of the process of evolution and of dubious long term merit despite being slightly better than the past.

Figure 9 - SIAM and Tower model


Of course, most companies have no situational awareness, no maps, and many will think the Towers of SIAM are a grand idea because (by pure probability of placing the right components into a small contract and some of the towers being more commodity-like) it will be slightly better than past models. But it's also easy for a savvy vendor to manipulate, it's not grounded on any understanding of change, and simply breaking a system into towers and outsourcing those is far removed from where good management is. Its true merit can only be as a transitional move to break an even more undesirable 'outsource everything' approach.

I wasn't surprised to hear Alex Holmes wanted to tear down the Towers. It's time they were. Departments need to start learning how to manage things effectively. Some already are.


The Towers of salmon and hungry bears by Simon Wardley

My post on the Towers of SIAM reminds me of a story I was told by one senior 'US official'. He used the story to explain how Government procurement works. He called it 'Bears and Salmon' and it went roughly like this ...

"Some of these suppliers are like bears. They expect to sit in the stream that is Government, stick their jaws into the water and scoop up tasty mouthfuls of food. Every year they expect their bounty to get bigger and their stomachs to be fuller.

One day, the salmon (that's us) decide we're going to take a different stream, we're going to find a better way of doing things because we're not about feeding hungry bears, we're about making the lives of people better. The bears stick their jaws in the water. No salmon! They get angry.

The bears roar "It's our fault". "We shouldn't be allowed to change" they cry. We get told that by changing streams then all the bears will die. It will all be a disaster because we didn't keep swimming in their stream and jumping happily into their jaws. "They are our partners" and we need to work with them.

I don't feel like a partner, I feel like lunch. The bears should move and we should change streams more often."

Well, over the years UK Gov IT has changed streams. It has made it clear that it wants to focus on transparency, make things better, not be afraid to admit mistakes, use modern practices and focus on our needs (as in ALL of us). Judging by the negative reaction to Alex Holmes' sensible post, some bears are angry.

Don't be surprised by how ferocious this might become. As Computer Weekly points out 'while large systems integrators still dominate government IT spending – thanks to the revenue from existing long-term deals – their share of new IT purchasing has fallen.' 

First, it's important to understand why the changes need to be made. I think Derek Du Preez nails it with the following paragraph 

"Holmes’ stance is essential for the development of government-as-a-platform. My instinct when reading his post was that taking a hard line on towers is being done because there has been a realisation that if the government wants to get itself into a position where government-as-a-platform is workable, then they need to be expelled."

The idea of Government as a Platform, re-using commodity components and providing services between departments to improve efficiency and speed must be truly frightening for some bears. There's a lot of profit made in massive amounts of duplication and the constant rebuilding of something that has already been built. Of course, you can't build such a platform if every department is using a silo'd approach and busily breaking projects into contract chunks which they outsource. 

What people forget is that Government as a Platform is unlikely to lead to smaller IT in Government. Yes, it means the things they do now will be more efficient but this almost always leads to new services. There is a long tail of unmet demand. Vendors might not be able to flog their old wares but then they can adapt ... bears need to move to a different stream when the salmon do.

In the bad old days of 'cloud', you'd hear vendors warning IT folk in a company that cloud would get rid of their jobs. Cloud doesn't reduce IT expenditure, it enables us to do more things. Today I can buy with $1,000 over a million times more compute than in the 1980s. Has my IT budget reduced a million fold because of this? No. We've just done more stuff.

Why? Competition.

As components become more standardised and cheaper, competitors build new things more quickly. We have to keep up, we have to compete. The same holds with nations. Some don't realise that other nations are looking at and building Government as a Platform. They will end up doing more stuff and creating advantage for their economies. We have to compete. Alas, that does mean the bears must move. They won't want to.

In these situations, inertia to change (there are sixteen different forms) normally exerts itself more visibly. This inertia can be seen from both the vendors (the bears) and those customers (salmon) who've become a bit too close. Often vendors will play up inertia in customers in order to keep existing contracts. For reference, I've included the types of inertia you'd expect to see in table 1.

Table 1 - Inertia


So when John Jones, director of strategic sales architects, Landseer Partners, which works with a number of suppliers on bids, including on service towers and SIAM, said 

"Government departments currently lack strong contract management skills. They just don't have them. Chunking up procurements into smaller service towers rather than giving it all to one outsourced vendor, at least gave departments the time to acquire contract management skills for the future."

Then you have to ask, if UK Gov has been running outsourced contracts for a few decades but not acquired strong contract management skills in that time then something is clearly wrong. Yes, there might be a need to invest in knowledge capital along with some cost of acquiring skill-sets (which they should really have) but that's no excuse for continuing with the past.  Is this supplier helping UK Gov or playing to inertia in the organisation?

I'll be keeping an eye on this over the next few weeks, should be interesting to see what examples turn up.


Shakespeare at the BBC: As You Like It on Radio 3 by Feeling Listless

Radio In case you hadn't noticed due to the lack of posts, I've been a bit ill this week with the cold/manflu thing which has been going around. After not knowing what to do with myself, it's actually left me with cold sores, or as is the case now scabs, around the mouth and lips which makes it incredibly difficult to talk, or smile, or eat or do anything which a mouth and lips are meant to do without some pain. Though it is getting easier and I have some antibiotics from The Doctor. Sorry, a Doctor.

All of which explains why I entirely failed to notice that the Drama on Radio 3 last week was a new production of As You Like It:

"A new production of Shakespeare's most joyous comedy with an all star cast and music composed by actor and singer Johnny Flynn of acclaimed folk rock band Johnny Flynn and The Sussex Wit.

Lust, love, cross dressing and mistaken identity are the order of the day as Rosalind flees her uncle's court and finds refuge in the Forest of Arden. There she finds poems pinned to trees proclaiming the young Orlando's love for her. Mayhem and merriment ensue as Rosalind wittily embarks upon educating Orlando in the ways of women.

With an introduction by Pippa Nixon who played Rosalind to great acclaim at the RSC and now reprises her role as Shakespeare's greatest heroine."
It's available to listen to here for the next few weeks.

You can also download it here.

[Note: the streaming page says it's only an hour and a half, but it is actually 2h 15m or so, so the cuts will presumably be pretty standard. I won't know until I can listen and I won't be doing that until I can smile properly.]


Are Extrasolar Worlds More Likely to Be Water-rich? by Astrobites

Water (still) seems to be important to harbour life

Scientists in many fields see the presence of water as a major ingredient for the potential of an ecosystem to support emerging life – however you want to define life. This might result from our lack of creativity in imagining beings in the absence of water, and is likely a direct consequence of the fact that we only have one “life benchmark system” which we know for sure. Of course, we are working hard on resolving this drawback of our ideas, as was just shown by researchers who modeled an oxygen-free cell membrane which might fit in the chemical environment of Saturn’s moon Titan. Anyway, since hydrogen is ubiquitous in the Universe and we can be sure that a watery environment acts as an excellent breeding ground, looking out for water on other worlds still seems to be our best bet.

Figure 1: Artist’s impression of a rocky and water-rich asteroid being torn apart by the strong gravity of the white dwarf star GD 61. Similar objects in our solar system likely delivered the bulk of water on Earth and represent the building blocks of the terrestrial planets. Credit: NASA, ESA, M.A. Garlick (space-art.co.uk), University of Warwick, and University of Cambridge.

Water delivery via accretion of small bodies

To be able to make a statement about the conditions all around us in the Universe we first need to understand where all the water on Earth came from. A currently favoured idea to explain the massive amount of water on the Earth’s surface is that most of it was accreted from water-rich impactors. In the early phases of its formation the Solar System underwent a phase of chaotic dynamics – the so-called Late Heavy Bombardment. Lunar rock samples from the Apollo missions show that this event most likely happened approximately 4 billion years ago. During this time a high number of asteroids collided with the terrestrial planets. But why is this important for the water on Earth? Since the Earth formed at a distance from our Sun where the temperatures in the protoplanetary disk are high (a lot of astrobites already exist on this topic here and in general here), not much water was incorporated in its bulk material. (This is also explained comprehensively in this astrobite.) However, the small bodies in the outer parts of the Solar System formed farther away and therefore harbour higher volatile (chemical species that go into the gas phase easily, including water) reservoirs than the inner terrestrial planets. This led to the thinking that most of the water we see in the oceans today was brought to our planet after its formation.

What about other planetary systems?

When looking at other systems far away from ours, we can ask: did they emerge from the same conditions as our own? Probably not always. Sure, we find a multitude of systems that resemble the Solar System, systems with multiple planets, super-Earths and so on. However, there is a crucial ingredient which determines the amount of volatiles in the primitive planetary bodies: the amount of heating by short-lived radioactive isotopes like ²⁶Al. These dominated the heating in early Solar System bodies and are the cause of most of their volatile loss (by outgassing to the surroundings). The abundance of ²⁶Al is likely variable across different planetary systems around solar-type stars and therefore other planetary systems might have more volatiles and thus more water!

The consequences for systems with more water

And that’s now the author’s approach in this study. They adopt conditions for higher volatile enrichment  than in our Solar System and calculate the gravitational dynamics and evolution of the bodies in these systems. To do so they use the N-body code MERCURY which models each body around the star as a particle and then calculates the gravitational interactions in between them and the resulting orbits and eventually impacts. Each individual particle is given a certain volatile mass fraction. They also account for the differences in stellar masses and the resulting changes in the extent of the habitable zone (in which liquid water can exist) and the mass of the disk around them (higher stellar mass in principle means there was more material available in the beginning and thus we also expect more material in the surrounding).


Figure 2: Accretion history of one of the simulations for bodies with higher volatile reservoirs in the outer parts of the planetary system. Plotted is the radius of the bodies versus the eccentricity of their orbit. The color scale represents the water mass fraction, from dry (red) to very water-rich (dark blue, violet) in log units. The bodies are initially aligned with zero eccentricity and then evolved in time. After some Myrs larger bodies form from accretion of smaller ones and incorporate their water contents. Credit: Ciesla et al. (2015)

Figure 2 gives an idea of what the outcome of such a simulation looks like. Initially all bodies are arranged in a certain distribution around the central star and then are dynamically evolved. After a certain amount of time most of the initially small bodies have been accumulated into bigger bodies and the resulting small ones are on very eccentric orbits. Now, running simulations with initial values like those of the Solar System and with higher volatile fractions enables the researchers to study the impact of such changes.

So, plenty of Kaminos out there in the wild?! Kamino?

Cumulative distributions

Figure 3: Cumulative distribution of water mass fraction vs. separation from the host star. Counting all planets which are built in the simulations with a certain stellar mass, this is the distribution of water-enrichment the study ends up with. We can clearly spot at least three different regimes: dry bodies in the very inner parts of the systems (red), very water-rich bodies, which tend to be found more in the outer regions of the systems (blue to violet), and intermediate-enriched bodies, which are found in the range of 0.2 – 0.7 AU from the star (green). Credit: Ciesla et al. (2015)

Sadly, we cannot know for sure at this point. We can imagine a lot, put as much knowledge, hard work and precision into our simulations as we can – in the end all our physical descriptions of the real world out there are just approximations, some better, some worse. Therefore, the authors of this study try to identify trends in the results of their simulations that might in the future enable them or others to compare their outcome with the distributions of observed exoplanets (see Figure 3). This is a very hot topic in exoplanetary research – doing statistics with the observed distribution of planets and identifying trends to understand the origin of planets in general. What they find is that adding water to the outer bodies, in comparison with the value of the Solar System, leads to much wetter planets. Additionally, they find that low mass stars host more planets of lower masses, whereas stars with higher masses end up with fewer, but more massive, planets. These are relevant descriptions of the stochasticity of planet population synthesis (the study of the planetary population on a statistical basis) and thus give clues as to whether our ideas are right and may or may not coincide with reality. This will eventually give observers something to refer to when we have detected and characterised (!) enough exo-worlds to do reliable statistics and find out whether at least some of our understanding withstands a critical check.


A First Time for Everything: Blitzing Congress for Space by The Planetary Society

There is a first time for everything. Riding a bike, stargazing, and yes, even lobbying Congress. Jack Kiraly describes his first Legislative Blitz with Michael Briganti and Casey Dreier on Capitol Hill last week.


Mini mission updates: Dawn in orbit; Curiosity short circuit; Rosetta image release; Hayabusa 2 in cruise phase; and more by The Planetary Society

Dawn has successfully entered orbit at Ceres, becoming the first mission to orbit a dwarf planet and the first to orbit two different bodies beyond Earth. I also have updates on Curiosity, Rosetta, Mars Express, Hayabusa 2, the Chang'e program, InSIGHT, and OSIRIS-REx.


Dawn Journal: Ceres Orbit Insertion! by The Planetary Society

Dawn's Chief Engineer, Marc Rayman, gives an update on the mission's highly anticipated arrival at Ceres.


Seeing Ceres: Then and Now by The Planetary Society

Technology writer Paul Gilster shares his interest in how we depict astronomical objects, focusing on the dwarf planet Ceres.


March 06, 2015

Briefly by Charlie Stross

I am still suspended head-down in a vat of boiling edits. The deadline is next Friday, so don't expect normal blogging to resume before then.

(The book in question, "Dark State", is tentatively due out from Tor in April 2016—assuming I can hit that deadline.)


Towers of SIAM, trade associations and Civil Servants. by Simon Wardley

A few weeks back, I picked up on Alex Holmes' post on the GDS website - "Knocking Down the Towers of SIAM". As a regular reader of UK Gov's GDS blog (well, I'm a British citizen who takes an interest in what UK Gov does), I was really pleased with this post.

Why? Two reasons.

First, it's great to see my Government talk clearly in the open about these issues. I love this sort of transparency. In days gone past, many of these conversations would have been buried in huddled rooms with suppliers. I, as ultimately one of the many millions of UK citizens who pay for, vote for and in effect employ UK Gov, was usually left in the dark.

The second reason is that the post tackles a problem with good sense and is not afraid to admit the errors of the past. There's no hubris, just honesty.

From the post ...

"A fundamental part of our guidance was about taking accountability for decisions about technology and digital services back into government. For large parts of the Civil Service that had so completely outsourced their IT, this meant a massive shift in approach, which takes time and can be scary."

Spot on. There was a huge problem in the past with departments becoming reliant on vendors and having little ability to challenge. It's pleasing to see this change.

"This fear of change meant some organisations clung onto the concept of outsourcing, which they understood, but they also wanted to comply with the new policy of multi-sourcing IT provision – something that is recognised as best practice across the industry."

A normal reaction caused by inertia is to attempt to bring the past world of practices into the new world. In the old days, this sort of stuff usually got hidden under the carpet. No more.

"Unfortunately, the combination of these two forces created a hybrid model unique to government. The model is usually referred to as the Tower Model.  It combines outsourcing with multi-sourcing but loses the benefits of either." 

Clear and precise without trying to hide the issue. Something that was implemented was interpreted in a different way. 

"The model has arisen because organisation have used a procurement-led solution in response to legacy outsourcing contracts ending. Rather than changing their approach and emphasis, they have ended up outsourcing their IT again, but in pieces. It was still all about us, not about the needs of our users. "

Ah, the nub of the problem. Rather than apply the right sort of practices (agile for the uncharted, six sigma for the industrialised) and use the right sort of purchasing arrangements (see figure 6 from this post on Government and Purchasing), we've ended up with an undesirable hybrid.

I'm so tempted to shout out "Go and Fix IT!" but wait ... 

"Organisations have adopted the Tower Model, believing they are following government policy and using best practice, but they are doing neither. I am now writing this post to be clear that the Tower Model is not condoned and not in line with Government policy."

Excellent. Outline the problem, explain the source and tell us what you're going to do about it. I couldn't be more pleased except the post then goes on to apply all sorts of common sense. 

"An important point about multi-sourcing is that different things are bought in different ways:  there is no “one size fits all” methodology.  Commodity products like hosting will still likely be outsourced to utility suppliers, but novel or unique things close to the user may be built in-house.  And components can – and will – be changed often."

"The Tower Model doesn’t work because it doesn’t fully consider what services are needed, or how they fit together and it uses a “one size fits all” methodology.  It relies on procurement requirements to bundle together vertically-integrated outsourcing contracts called things like ‘network’ or ‘desktop’. It also usually outsources the service accountability, architecture and management to a third party."

This is manna from heaven. I'm so pleased, thrilled and happy to hear UK Gov talk about projects in this way. Common sense, frank discussion, delivered transparently and directly to me and the many millions of other UK Gov employers (i.e. us voters; when it comes to Government then "they work for us"). This is a million miles away from secret conversations, voters being kept out of the loop and all the other nonsense that comes with such. This one post shows me how good UK Gov IT is and it couldn't make me happier.

However, I was astonished when I started to read some of the comments and subsequent press articles. Many of these were dubious at best and based upon a premise that the past was somehow better. I could spend weeks taking them apart but by fortune, I came across this little gem. 

In the post, techUK was

"surprised and concerned" by GDS announcing "significant change in policy" through a blog post "without any opportunity for industry consultation"

Ok, who are techUK? Well they're a trade alliance that represent companies that provide technology to Government. They help their members and the technology sector to grow. I'm sure they do a lot of good things.

BUT they also 'promote, encourage, foster, develop, co-ordinate and protect the interests of the Members', which, to you and me, is common tongue for lobbying.

I noticed they had suggested "industry" consultation rather than "public" which means firstly introducing a long delay through consultation and secondly we (as in the public) might not get to hear what's happening. I had images of cosy chats kept secret for reasons of commercial confidentiality. Based upon past experience, I prefer the UK Government route of being transparent and moving quickly.

I continued to read.

Apparently the GDS post "has caused our members some alarm". As a UK voter, I was delighted by the GDS post and so this didn't make sense? I'd be alarmed if UK Gov IT wasn't making things more efficient. But maybe that's the problem because making things more efficient usually means someone else isn't making quite so much profit. Could this cause alarm to a member's interest?

It went on ..

"Government must now engage with the whole of the industry, large and small, to help shape future transition and strategy and agree the best way to define and procure the solutions they need."

Again no mention of us (the public) but then it's an industry association I suppose. It felt a bit rich for a trade association to tell Government that it "must now engage". I'm sure I voted for Government to be in charge and not anyone else. I can't imagine that if one of techUK members decided to change their purchasing policy that their supplier would make such demands of them. 

It kept on nagging at me but who is standing up for us (as in the public) then?

So I re-read Alex's post. It was clear that Alex didn't say "Government should focus on a trade association needs" but instead that Government should "focus on user needs" i.e. all of us including you and me. That makes sense. Of course, to do this Government must find the best way to procure the solutions that we need.

We should remember, the whole problem with the past was that Government had outsourced so much it couldn't challenge what its vendors (or 'partners') were saying. This was then though and as Alex made clear - Government needs to decide, it needs to take responsibility. 

By now, I'm starting to feel a bit uncomfortable with the special pleading of a trade association. My sarcasm dial had reached 9. So, to speed the process up I decided to decipher their text into what I think they actually mean or plaintext as I like to call it.

"What is going to happen to current projects and those in the pipeline?"

Plaintext : Some of members have got interests in the projects in your pipeline and are worried that changes might hit their interests.

"What was the evidence base used to cause this change in policy?'

Plaintext : We need you to stop this.

"Will this new approach endure? Or will industry and departments be looking at another change at some time in the future?"

Plaintext : We're going to try and stop you.

It then went on to add a three point plan under the title "techUK outlines its plan for better public services". I was surprised by this. Let us think about this title and be clear about what it means.

A trade association whose memorandum includes protecting members' interests is outlining a plan for better services for the Government, when some of its members have been providing the very services that we need to make better?

My sarcasm dial went up to 10.

Before starting to read the plan, I wrote down a couple of things that a worst case pleading scenario would have i.e. flogging innovation for stuff that doesn't matter, claiming to do stuff the Government was already doing, emphasising how it would help in some form of "partnership".  So, I opened the link and read the three point plan. It was - engagement, information and innovation.

My sarcasm dial went all the way up to 11. 

At this point a gentle bit of ribbing feels in order. I can't help but add my own translation of this plan with sarcasm set to the max (i.e. snarkytext).

"Better engagement, to support civil servants earlier in the process and help develop policy with technical expertise. techUK members are committing resource to engage much earlier in the process, ensuring officials develop policy with a proper understanding of what technology can do."

Snarkytext: In the good old days, UK Gov had outsourced all its capability to challenge any proposal made and was dependent upon suppliers. We feasted. We'd like to feast again. We are your "partners".

"Better information, providing standardised, transparent reporting. This will overcome the problems of wildly varying reporting requirements on public sector contracts, which had the effect of making one scheme impossible to compare with another. The industry will agree a standardised data and evaluation scheme, allowing Government to pick and choose suppliers more effectively."

Snarkytext: We are going to tell you what information you need and that's all you're going to get. We're going to call it transparency because we know you do a lot of this. But your kind of transparency is not the kind of transparency we want to use. We're going to tell you how to pick and choose vendors. We're going to tell you it's more effective that way. We'd like to feast again. We are your "partners".

"More innovation, giving civil servants the opportunity to experiment and explore solutions in a risk-free environment. techUK's 'innovation den' model will be used to provide a test platform for new projects, and is designed to overcome the problem of public sector innovation being strangled by the fear of failure. techUK will develop a 'techmap' of suppliers, ensuring Government is aware of all the options available to them."

Snarkytext: In the good old days, suppliers feasted because you didn't take responsibility. Civil Servants shouldn't take responsibility. We want to make it risk free. We like risk free because despite the past history of massive IT project failures, you still pay through the nose for it. We also like innovation because you pay through the nose for it. We'd sell you innovative sand if we could. We have a 'den'. If you want to play in our 'den' then give us what we want. We'd like to feast again. We are your "partners".

I'm sure trade associations like techUK do a lot of good work but UK Gov IT has been clear. They want to be transparent, they want to make things better, they're not afraid to admit mistakes and they want to focus on us (as in ALL of us). They want the future. It's time for others to adapt.

Alex is spot on. It is time for government to put user need first and take back control of its IT.


Tumblr. by Feeling Listless

About During the week, Tumblr emailed to remind me that I set up one of those eight years ago when it was still in beta testing.

Across time it's generally been a place to collect together content from various places and I've decided (now that If This Then That is intermingling the APIs) to try something similar again.

Here it is.

At present it's posts from this blog, links from Twitter and flickr images. This will hopefully be the first post if everything is working properly.  I know some of you prefer it over there (or here if you're reading this on Tumblr) so here we are.  Hope you like the choice of title bar.

Not that I actually understand Tumblr in the same way that I didn't understand LiveJournal back in the day.  Did you know LiveJournal's still going?


Other tools I use with mapping by Simon Wardley

In this post I'm going to discuss some of the other tools I use with mapping. I'll start by expanding on the map I created in the Introduction to Wardley Maps post. This original map was used to determine user needs and strategic play, remove duplication and bias, enable collaboration and communication between groups, and identify appropriate methods (whether project management or purchasing). See figure 1.

Figure 1 - Basic Map



Sub Maps

Sometimes maps can get very big and very complex. Hence I tend to use a high level map (as above) with several of the nodes representing a sub map. You lose granularity with high level maps in the same way that an atlas loses granularity of the street level. This is fine; the high level map is what you use to determine your overall play and the sub maps often contain quite tactical details. Hence you might use a high level map at board level, but a project manager in one area of the business might use the high level map (to see how their part fits into the whole) and sub maps giving more details.

To create sub maps is trivial. Start with the component node as the new NEED i.e. if you select streaming service (in the above) then your users are Website and Internet broadcast and they both have a NEED for streaming service. See figure 2.

Figure 2 Maps and Sub Maps
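As a minimal Python sketch of that operation (my own illustrative data structure and a made-up fragment loosely echoing the components named above, not a formal map format): if the map is held as a simple "component -> components it needs" dictionary, a sub map is just the slice reachable from the chosen node, with that node as the new need.

def sub_map(needs, root):
    """The slice of the map reachable from root, with root as the new need."""
    result, stack = {}, [root]
    while stack:
        node = stack.pop()
        if node in result:
            continue
        result[node] = needs.get(node, [])
        stack.extend(result[node])
    return result

example_map = {
    "website": ["streaming service", "CRM"],
    "internet broadcast": ["streaming service"],
    "streaming service": ["compute", "recommendation engine"],
    "CRM": ["compute"],
    "recommendation engine": ["compute"],
    "compute": [],
}
print(sub_map(example_map, "streaming service"))
# {'streaming service': ['compute', 'recommendation engine'], 'recommendation engine': ['compute'], 'compute': []}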


RISK

Critical Path / Fault Tree

I also use maps to help determine critical paths and fault trees i.e. if something happens to my recommendation engine where does that impact me? In the above scenario I can still commission shows and deliver a streaming service but it does impact the web site. If however I lose internal power then multiple systems can be impacted and no service is delivered. Hence I try to mitigate this with multiple sources of power or by pushing a higher order system (such as compute) into a more resilient environment. In this case because compute is a commodity, I'd tend to use a volume operations based service in which I distribute my risk across a wide geography (e.g. cloud services like AWS). I've shown an example of such analysis in figure 3.

Figure 3 - Fault Tree / Path analysis
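A rough sketch of the same question in code, again using an illustrative "component -> components it needs" dictionary rather than anything from this post: walk upwards from the failed node to find everything that directly or transitively depends on it.

def impacted_by(needs, failed):
    """Every component that is (directly or transitively) hit when `failed` breaks."""
    impacted, changed = set(), True
    while changed:
        changed = False
        for component, deps in needs.items():
            if component not in impacted and (failed in deps or impacted & set(deps)):
                impacted.add(component)
                changed = True
    return impacted

value_chain = {
    "website": ["streaming service", "CRM"],
    "streaming service": ["compute", "recommendation engine", "power"],
    "CRM": ["power"],
    "recommendation engine": ["power"],
    "compute": ["power"],
    "power": [],
}
print(impacted_by(value_chain, "recommendation engine"))  # website and streaming service
print(impacted_by(value_chain, "power"))                  # everything that sits above it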



Contract Analysis

One very useful technique is to overlay any existing contracts onto the map. Ideally, you want to have small contracts focused around each node rather than large all encompassing contracts with outside parties. The reason for this is that the methods you need to use vary with evolution (i.e. agile one side, six sigma the other) and if you have a large contract then what tends to happen is a single method (e.g. six sigma) is applied across the lot. More often than not, this is very costly in terms of change control with those uncharted activities. This is because they are going to change and a structured method will tend to punish change. You really should be using agile methods in that uncharted space.

Always break down those large contracts (unless you're a vendor in which case, take advantage and feast on the inevitable change control cost overruns caused by them). In figure 4 I've given an example of good and bad contract structure.

Figure 4 - Contract Structure and Maps



Methods

I've covered this many times before but I cannot emphasise enough the importance of using the right purchasing and project management methods within a large scale project. I use METHODS deliberately because there isn't a one size fits all method but instead you need to use many. Figure 5 provides a visualisation of where each method is appropriate.

Figure 5 - Project and Purchasing Methods


For heaven's sake, don't confuse those polar opposites of uncharted and industrialised with a need to create a bimodal structure. There is a huge gap between them. 


Anticipation

This is more a board level issue but worth noting. One of the huge advantages of mapping is organisational learning. You quickly discover common economic patterns (from componentisation effects to creative destruction to the peace / war and wonder cycle) along with methods of manipulating the environment to your favour (open approaches, constraints, FUD etc). There are lots of games you can play and get good at.

When it comes to anticipation then weak signals are critical (there are numerous you can use). One of my favourite long term signals is publishing type (i.e. how the activity is described in common publications). Do remember that you can map activities, practices, data and knowledge and the weak signals apply across all.

With the peace, war and wonder cycle, the part which can be most anticipated is the 'war' phase. This phase can be highly disruptive but since you can anticipate it, you can also prepare for the change. It's worth noting that there are two forms of disruption - the more predictable product to utility substitution which creates the 'war' and the unpredictable product to product substitution. Don't confuse them.

Hence, I'll often look for 'predictable' points of change by identifying the wars (see figure 6) and then examining my map (see figure 7) to see where I will be affected.

Figure 6 - Points of Change and likely future 'Wars'



Figure 7 - Points of War in your value chain.


So from the above I know that compute is undergoing drastic changes towards a utility (in fact we knew this was imminent in 2004). I know that traditional media (DVDs) is being impacted along with aspects like CRM that are on the verge of change. Recommendation Engines and Streaming services haven't industrialised yet (but remember this map is several years old). They will.

As with other changes of this type, I already know the impact (disruption of the past stuck behind inertia barriers, new forms of un-modelled data, co-evolution of practice, rapid explosion of higher order systems that use the component etc) before it has happened. I know that I'll also have inertia to the change, but being prepared and knowing where it will hit means that I can do something about it.


Business 

When it comes to business, there are two aspects of mapping I use: the flow of money through the map and competitor analysis. These are quite detailed topics but I'll summarise.

Competitor / Market Analysis

I often use maps to look at competitors. This can be either by examining their value chains to identify any inertia they might have to predictable forms of change or I might look for any differentials that they have with us. I'll also use maps to look at different markets. This sort of analysis can give me an idea of where I need to protect, against whom, how I can exploit the situation to my advantage, and which region future competitors may come from. When I'm looking at different markets, I examine whether that market is more evolved or more emerging, whether the activity is price elastic or not, what constraints exist and what games (e.g. an open play) are being used in the wider market.

I've given an example of such a scenario planning exercise from a different industry in figure 8 and I'll often run games around these to tease out impacts and edge cases.

Figure 8 - Scenario Planning



Business Flow

I like numbers particularly when those numbers represent cash in a profitable way. When we think of business, I tend to think in terms of flows of revenue and cost. I hence use the map to create different flows through the system, identify cost and revenue and create business flow diagrams (see figure 9).

Figure 9 - Business Flow Diagram



Sometimes I'll find one business flow is more profitable than another. Sometimes an act will evolve and a particular flow will become unprofitable or another will become profitable or a further will become possible or most likely a combination of all the above will happen. These diagrams can be a bit tedious hence it's a good idea to like making money. However, the upside is that you can often find entirely new ways of making revenue during the process and it certainly helps you to focus on this.

Opportunity

I'll use maps to look for opportunity i.e. points which we can attack. An example is being the first mover to create an industrialised component and then building an ecosystem on top.  I might also look for areas where we might wish to gamble and experiment building an uncertain need. 

My preferred approach is not to gamble but to be the first mover to industrialise and the fast follower to any new activities built on those industrialised components. This is a technique known as Innovate, Leverage and Commoditise. You commoditise an act, get everyone else to Innovate on top of it and then mine (i.e. Leverage) the consumption data of your ecosystem to spot future successful innovations that are evolving.  You can then Commoditise those activities (or data) and rinse & repeat. See figure 10.

Figure 10 - ILC play.
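A toy sketch of the Leverage step, with entirely made-up activities and consumption numbers: mine usage data from the ecosystem and flag the fastest-growing higher-order activities as candidates to commoditise next.

def rising_candidates(usage, threshold=2.0):
    """Activities whose consumption grew by at least `threshold` times, fastest first."""
    growth = {act: new / max(old, 1) for act, (old, new) in usage.items()}
    return sorted((act for act, g in growth.items() if g >= threshold),
                  key=growth.get, reverse=True)

print(rising_candidates({
    "image resizing": (1_000, 9_000),
    "video transcoding": (5_000, 7_000),
    "sentiment analysis": (200, 2_600),
}))
# ['sentiment analysis', 'image resizing']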



Operation

Team Structure

Operations is one of my favourite topics (along with strategy) but in this case, it's all about getting stuff done. It's the execution side and that means people (but not in a Wolf Hall sense). I first start by using a map to break out the team structures that I'll need - see figure 12.

Figure 12 - Team structure


Now, I almost always break the map into small autonomous and self organising teams (i.e. two pizza model, no team bigger than 12 people). However, I cheat a bit by overlaying attitude (a technique known as pioneer, settler and town planner) to make sure I've got the right sort of attitude and aptitude in each team. I encourage everyone to work on FIST principles (Fast, Inexpensive, Simple and Tiny).

Now, if you're successful then the size of the system will grow. Hence, as this happens I tend to break down larger components into smaller components in order to keep the teams small (think Starfish model). It's worth noting that the lines between the nodes on the map are interfaces of communication. I'll use these to create fitness functions for each team, so everyone knows what each team is supposed to be doing i.e. what needs they are meeting, their metrics against those needs, business flows etc. The maps help glue it all together to give a picture of what we're doing and where we're attacking.

Before you ask about the all important "Why?" of strategy, do remember "Why?" is a relative statement, as in "Why here over there?". Maps help you find the many wheres and hence determine the why. Strategy however is another discussion which I've touched upon in many previous posts.


Scheduling
For scheduling between teams, I use Kanban. I don't see the point of using anything else.


Evolutionary Flow
In some circumstances I'll create a profile for a system, whether it's a company or a line of business (LOB). This profile is useful in understanding the type of people I'll need to recruit and the balance of the organisation. These topics are beyond the scope of this post but such a diagram can be simply created by counting the frequency of components in each stage of your map - see figure 13

Figure 13 - A profile
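A minimal Python sketch of producing such a profile (the stage labels and components are my own assumptions for illustration): simply count how many components of the map sit in each stage of evolution.

from collections import Counter

def profile(components):
    """components: component name -> evolution stage."""
    stages = ["genesis", "custom built", "product", "commodity"]
    counts = Counter(components.values())
    return {stage: counts.get(stage, 0) for stage in stages}

print(profile({
    "recommendation engine": "custom built",
    "streaming service": "product",
    "CRM": "product",
    "compute": "commodity",
    "power": "commodity",
}))
# {'genesis': 0, 'custom built': 1, 'product': 2, 'commodity': 2}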




Validation

Finally, it's worth mentioning that I use a couple of tools to help validate the maps. Despite my teasing of SWOT diagrams, I do find them useful in some circumstances. However, the grand-daddy of them all is the business model canvas (BMC), which is my natural preferred choice.

Once I have a map from which I've removed bias, worked out strategic play, created the business flows and built an operational structure, then I have everything I need to fill in a BMC. For me these are exercises in checking that I didn't miss anything obvious. I never start with a BMC but I do tend to use one in the finishing stages of my first draft of a map.

Why first draft? Well, maps are dynamic. The environment will change because everything evolves through competition. You can't stop it, just get used to it. Hence once I've validated my first draft, I tend to live off it and rarely will go through another validation exercise. We're in the game now and I know my maps will improve the more I play.


Valuations: Public and Private Markets Misleading Each Other? by Albert Wenger

With the NASDAQ going above 5,000 for the first time since the year 2000, valuations in tech are once again on everyone’s mind, mine included. For a long period of writing here on Continuations I have thought valuations were too high, and they have only gone higher since. All the arguments I provided back in 2012 are still true and in particular the line that “rates of return available on many other investments are at historic lows” — this is especially true for interest rates. Very low interest rates provide double fuel for stock prices. First, because investment dollars migrate from fixed income to equities. Second, and more importantly, because discount rates are low. If you have ever built a DCF model you know just how insanely sensitive the valuation is to the interest rate. And with technological deflation it is actually a reasonable expectation that interest rates will stay super low.
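As a quick illustration with made-up cash flows (not a claim about any particular company), a few lines of Python show how much a discounted cash flow valuation moves when only the discount rate changes:

def dcf(cash_flows, rate):
    """Present value of a series of yearly cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100 * 1.05 ** t for t in range(20)]  # twenty years of cash flows growing 5% a year
for rate in (0.04, 0.06, 0.08):
    # the same stream is worth far less at 8% than at 4%
    print(f"discount rate {rate:.0%}: value {dcf(flows, rate):,.0f}")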

Yesterday, Mark Cuban wrote a click-bait-titled rant on valuations, which I read after it had popped up multiple times in my Twitter stream. While I didn’t think it was entirely coherent, it did give me an interesting idea for a weird interaction between public and private markets that I had not previously considered. During the original Internet bubble, public market valuations skyrocketed relatively quickly in an all-out frenzy. Private valuations didn’t have that much time to follow because a lot of companies went public quickly and then the bubble burst.

This time round things have grown much more gradually, and the supply of public companies has been small compared to the amount of private wealth creation, as companies have been slow to go public or have avoided it altogether (see my related posts on still waiting for IPO 2.0). So now we have a different phenomenon: demand in the public markets outstrips supply, which results in higher prices. But then in turn it is those high public market multiples that inform private market valuations. And voila, you have a case of M.C. Escher's famous picture of hands drawing each other (dear Internet: someone please put “public” and “private” on the hands and have them write 10x each; addendum: gorgeous illustration provided by Alec Hutson).

Because it is happening gradually, and because the logic looks internally consistent (and add to that the low interest rates), this could go on for quite some time. This strikes me as a classic case of Nassim Taleb's point about fat-tailed distributions, where it is the higher order moments (kurtosis) that really matter. So the process looks very smooth and gradual for quite some time until there is a sudden and fairly violent swing.


How old is the Hyades? by Astrobites

Title: Bayesian ages for early-type stars from isochrones including rotation, and a possible old age for the Hyades
Authors: Timothy D. Brandt & Chelsea X. Huang
First Author’s Institution: Institute for Advanced Study
Status: Submitted to ApJ


The Hyades open star cluster. Image credit: Bob King

The Hyades cluster forms the head of the zodiac constellation Taurus, the bull. It is one of the most famous open clusters—a group of stars that all formed at the same time from the same cloud of gas. This cluster was thought to be 625 million years old; however, new research suggests that the Hyades is much older. This makes for a slightly awkward situation: the Hyades underpins our understanding of stellar ages. If its age is wrong, then a lot of other ages are wrong too…

To understand why the Hyades plays such an important role in stellar dating it’s necessary to explain a little bit about how the game works (and what a mess it is!).

Isochrones, models and interpolation

It can be difficult to tell the age of a main-sequence star just by looking at it; changes that happen deep in the stellar core don’t have much impact on its outward appearance. Stars do change a little in brightness over their main-sequence lifetimes, and by measuring their luminosities in different colours we can use some (slightly convoluted) trickery to infer their ages. We can measure the apparent magnitude of a star in different colours—this is a pretty easy measurement, so it's available for lots of stars. You can almost think of it as a really low-resolution spectrum. If you know how far away your star is, from parallax measurements, for example, you can figure out the star’s absolute magnitude in those colours.
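That last step is just the distance modulus; a quick sketch in Python, with made-up numbers for the apparent magnitude and parallax:

    import math

    # Distance modulus: M = m - 5 * log10(d / 10 pc), with d = 1 / parallax (arcsec).
    apparent_mag = 8.5        # apparent magnitude in some band (made up)
    parallax_arcsec = 0.0215  # parallax in arcseconds (made up, ~46.5 pc)

    distance_pc = 1.0 / parallax_arcsec
    absolute_mag = apparent_mag - 5.0 * math.log10(distance_pc / 10.0)
    print(f"distance = {distance_pc:.1f} pc, absolute magnitude = {absolute_mag:.2f}")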


Figure 1. An example Hertzsprung-Russell diagram (similar to a colour-magnitude diagram, with temperature instead of colour) from this paper. Each line is an isochrone—a line of constant age for a given mass and metallicity. The age of a star is found by matching its position on the diagram to the nearest isochrone.

The next step involves matching the observations up with theoretical predictions. Complex physical models of stars are used to produce predictions of the luminosity you would expect, given a mass, metallicity and age. You can’t do this for all values of mass, metallicity and age—that would take forever! The model predictions are therefore computed on a grid, at discrete intervals of the three properties, to produce isochrones. Isochrones are lines of constant age on a colour-magnitude diagram, or Hertzsprung-Russell diagram (see figure 1). Two stars that lie on the same isochrone should be the same age, and their mass and metallicity define where on the isochrone they sit. By finding the position on an isochrone that is closest to your star’s position on the colour-magnitude diagram you can estimate its mass, age and metallicity. Even better—don’t simply match your star to the nearest isochrone, but interpolate across the grid to find the values of mass, metallicity and age at the exact position of your star.
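As a toy illustration of that matching step (nothing like the paper's actual Bayesian machinery, and using a tiny made-up model grid), nearest-point matching in the observable plane looks something like this:

    import numpy as np

    # Made-up model grid. Columns: mass (Msun), [Fe/H], age (Gyr), colour, abs. magnitude.
    grid = np.array([
        [1.0, 0.0, 0.5, 0.65, 4.8],
        [1.0, 0.0, 1.0, 0.66, 4.9],
        [1.2, 0.1, 0.5, 0.45, 3.9],
        [1.2, 0.1, 1.0, 0.47, 4.0],
    ])

    observed = np.array([0.455, 3.92])   # observed colour and absolute magnitude

    # Nearest grid point in the observable plane (a crude stand-in for
    # interpolating across the grid).
    distances = np.linalg.norm(grid[:, 3:] - observed, axis=1)
    mass, feh, age = grid[np.argmin(distances)][:3]
    print(f"estimated mass {mass} Msun, [Fe/H] {feh}, age {age} Gyr")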

I should mention, by the way, that this method of finding an age for a star isn’t great. It’s still the best thing we can do, but ages measured by isochrone fitting tend to have uncertainties of at least 50% and often more than 100%. That’s just because, like I said, stars don’t change their outward appearance very much over their main-sequence lifetimes.

The method outlined above is standard practice. So what’s new in this paper? Well, usually, the theoretical models that predict a luminosity for a given mass, metallicity and age don’t include the effects of stellar rotation. All stars rotate, though, and some rotate quickly enough to make a big difference to their positions on the colour-magnitude diagram. Stars that rotate rapidly fling out the material around their equators, increasing their radii slightly. The equatorial parts of the star are further away from the hot stellar interior and are therefore a little cooler and less luminous. Both of these factors affect the star’s luminosity. Try to fit a rapidly rotating star to an isochrone that doesn’t take rotation into account and you may well measure the wrong age for that star.


Figure 2. The marginalised posterior probability distribution of the Hyades’ age and metallicity. The two contoured regions encircle the most probable values for the age and metallicity of the cluster. The left-hand high-probability region shows the result for models without rotation; the most probable age for the Hyades using non-rotating models is around 750 million years. The right-hand region shows the result for models with rotation; the most probable age for the Hyades using rotating models is around 950 million years.

The Hyades’ age problem

The authors of this paper use rotating stellar models to measure the age of the Hyades cluster and, guess what? It turns out to be older than previously thought. It’s gone from a youthful 625 million years to a slightly less youthful, but still pretty youthful, 950 million years. Of course, this difference is minuscule when you consider that most Hyades stars will live for billions of years; aren’t I splitting hairs? Here’s the thing though: stellar ages are really, really hard to measure for main-sequence field stars, but much easier for cluster stars. That’s because cluster stars form a single stellar population – they’re all the same age and roughly the same metallicity. You can fit an isochrone to an ensemble of stars—that gives you a much tighter constraint on its age (assuming the stellar models are correct, of course). For this reason, clusters are used to calibrate other stellar dating methods. Gyrochronology, for example: the method of dating a star from its mass and rotation period relies enormously on the fact that the age of the Hyades has very small error bars! If the age of the Hyades is wrong, that could have a serious domino effect: hundreds of stars could have the incorrect age as a result.

Posteriors, posteriors, posteriors!

Everything the authors of this paper do is probabilistic. They don’t report one age for the Hyades; they report a probability distribution over ages (and provide samples from the posterior probability distribution). They even provide an awesome web interface where you can enter the name of a star, along with a rough estimate of metallicity (actually a metallicity prior), and it spits out the posterior probability distribution for the age of that star. The posterior probability distribution for the Hyades’ age and metallicity is shown in Figure 2.

A resolution?

This conflict may be resolved soon—the Kepler spacecraft (now reincarnated as K2) is currently observing the Hyades. It will be able to detect asteroseismic oscillations in some of its stars, revealing their true ages. Hundreds of inferences rely on the age of this cluster—unveiling the mystery will be an exciting moment for stellar astronomy!




Updated using Planet on 26 March 2015, 06:48 AM