Francis’s news feed

This combines on one page various news websites and diaries which I like to read.

April 26, 2015

A few gems from the latest Cassini image data release by The Planetary Society

I checked out the latest public image release from Cassini and found an awesome panorama across Saturn's rings, as well as some pretty views looking over Titan's north pole.


New Horizons One Earth Message by The Planetary Society

The One Earth Message Project is going to send a message to the stars, and we invite members of the Planetary Society to join us in this historic endeavor.


LightSail Readiness Tests Prepare Team for Mission Operations by The Planetary Society

The LightSail team continues to prepare for the spacecraft's May test flight with a series of readiness simulations that mimic on-orbit operations.


April 25, 2015

One of the most important articles Buzzfeed has ever published. by Feeling Listless

People An example of "I can't believe this has to be said, and we still live in a world in which it still has to be said": Bim Adewunmi (of The Guardian, and Buzzfeed here) writes one of her best pieces about the perception that articles about non-white people should for some reason be described as being about non-white people. For some reason.

For those who urged the inclusion of the word “black” in the headline of the beauty tutorial article, I want to ask: Do you require the lists elsewhere on the internet to include “white”, ever? Does “diversity” matter to you when these kinds of lists, and others, are populated entirely by white people, sporting “fair n silky hair” and “super pale palettes”? On how many posts have you felt the need to call for diversity, when those posts had black and brown faces sprinkled through them like stray beans in a pot of rice?
Yes, exactly.  Why does something somehow become not for you or about you if it doesn't feature someone who looks like you?


April 24, 2015

Gov should start handing over large wads of cash to us, preferably in a truck by Simon Wardley

The latest piece of craft from Kat Hall on how a “GDS Monopoly leaves UK.gov at risk of IT cock-ups” was interesting, to say the least. I’m sure Kat Hall is under pressure to write articles, I’ve seen the Register help create some very fine tech journalists (see @mappingbabel) and I have no doubt Kat will follow the same path. However, this one instance was not her finest hour.

I’ll leave it at that though because my real interest lies with the report and not a debate over "what is journalism". Before promoting a report, I tend to ask some questions - why was it written, why now, who wrote it, what was it based upon and how will it help? I do this because a lot of stuff in the ether is just PR / lobbying junk dressed up as helpful advice.

At this moment in time (due to the election), Civil Servants are governed by the Purdah convention which limits their ability to respond. What this means is that any old lobbying firm can publish any old tat knowing they’re unlikely to get a response. Launching an attack on a department at this time is about as cowardly as you can get. These people are public servants, they work hard for us and a bunch of paid lobbyists or consultants taking swipes is not appropriate.

The report “GOVERNMENT DIGITAL SERVICE 2015” was written by BDO. They’re a big consultancy working in the public and commercial sector, with a glossy web site and lots of pictures of smiling, happy and clapping people. They talk a lot about innovation models, exceptional client service and “value chain tax planning”.

The report starts with “GDS has been an effective catalyst for transformation” (basically, let's be nice to you and pretend we're friends before we bring the punches out) and then goes on to proclaim major risks which need to be sorted! I’m already starting to get that icky feeling that “major risks which need to be sorted” is code for “pay us lots of money”.

Ok, the three major risks are highlighted as accountability, commercial and efficiency. We will go through each in turn.


THE ACCOUNTABILITY RISK: 
“GDS’s hands-on approach to advising programmes reduces its independence as a controls authority”.

A bit of background here. Many years ago, before writing the Better for Less paper, I visited a number of departments. All these departments suffered from excessive outsourcing i.e. they had outsourced so much of their engineering capability that they were unable to effectively negotiate with vendors, as the department was often little more than project managers. In the Better for Less paper we talked about the need for intelligent customers, that the current environment had to be rebalanced, and that we had to develop skills again in Government. Now, this excessive form of outsourcing wasn’t a political dogma but a management dogma. It’s why we used to pay through the nose for stuff which often wasn’t fit for purpose. With a bit more internal skill, I’ve seen £1.7M contracts tumble to £96,000. Yes, 95% savings are not unheard of.

However, it's not just GDS. There are many Departments, the Tech Leaders Network and systems like G-Cloud which have made a difference. A very important factor in this was OCTO (Spend Control) and their introduction of a policy of challenging spending.

The report says “Accountability is the key to risk management and accountability must always be with the department that holds the budget and is mandated with the service” and that has always been the case. The Departments are accountable and they hold the budget.  However, CHALLENGE is an essential part of effective management and that requires the skills necessary to challenge. 

To explain why this is important, I'll give you an example from a Dept / Vendor negotiation which in essence was little more than :-

Dept. “What options do we have for building this system?”
Vendor “Rubbish, Rubbish, Us”
Dept. “Oh, we better have you then. How much?”
Vendor “£180 million”
Dept “Could you do it for £170 million?”
Vendor “Ok”

It wasn’t quite like that as the vendor had to write some truly awful specification documents and options analysis, which it charged an eye-watering price for under a fixed preferred supplier agreement. There was a semblance of a process but no effective challenge. You couldn’t blame the department either; the past mantra had been to outsource everything, and they didn't have the skills to know what was reasonable. I’ve seen exactly the same problem repeated in the commercial world numerous times - departments operating in isolation, alone, without the skills required. They are easy pickings.

GDS and Spend Control changed that by forcing some challenge in the process. Of course, if you’re used to chowing down on Government as an easy lunch then those changes probably haven’t been very welcome. Whilst some Departments were bound not to like being asked hard questions - “but, it’s our budget” - others responded by skilling up with necessary capabilities. 

You can’t separate a control authority (the point of challenge) from the skills needed to challenge unless your goal is to pay oodles of cash to outside vendors for poor delivery. I can see the benefit for a consultancy delivering services but not to a Government serving the public interest.


THE COMMERCIAL RISK: 
“GDS’s preference for input based commercial arrangements rather than a more traditional outcomes-based commercial approach”

First, as someone who created outcome based models for development a decade ago, I can clearly state this is not traditional unless the outcome is delivery to a specification document. This is an important distinction to understand.

One of the key focuses of GDS has been user need i.e. identifying the volume of transactions Government has, identifying the user needs behind those transactions, and building to meet those needs. This is a huge departure from the past model, where the user need was often buried in a large specification document and the goal was delivery to the specification whether it met user needs or not. So, you first need to ask which outcome you are focused on - user need or delivery to a specification?

When you are focused on user need, you soon realise you’ll need many components to build that user need. Some of the components will be novel and some will be industrialised (i.e. commodity like). The methods and techniques you will use will vary. I could give examples from the Home Office and others but I’ll use an example map from HS2 (high speed rail) to highlight this point.

Example map


The user need is at the top. There are many components. The way you treat them will be different according to how evolved those components are. This sort of mapping technique is becoming more popular because it focuses on efficient provision of user needs. Doing this involves multiple different types of inputs from products to utility services to even custom built components and applying appropriate methods.

Now, in the traditional approach, which is building to a specification, there is usually very little description of the user need (or where it exists it’s buried in the document) and almost certainly no map. This delivery mechanism normally involves a very structured method to ensure delivery against the specification i.e. the focus is not “did we deliver what the user needed” but “did we deliver what was in the specification / contract”. Consultants love this approach, for good reasons which I'll explain.

Take a look at the map from HS2 again. Some of the components are in the uncharted space (meaning unknown, novel, constantly changing) whilst others are more industrialised (well defined, well understood, common). Whilst the industrialised components can be specified in detail, no customer can ever specify that which is novel and unknown. Hence, we tend to use methods like six sigma, detailed specifications, utility services and outsourcing for the industrialised components of the project but at the same time we use agile, in-house development for the novel & unknown.
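To make the split concrete, here's a toy sketch (the component names, evolution scores and thresholds are invented for the example; a real Wardley map is drawn and argued over, not computed):

```python
# Toy illustration: pick a delivery / procurement approach per component
# based on how evolved it is. Everything here is illustrative, not a tool
# anyone actually uses on a real programme.

components = {
    # name: evolution, where 0.0 = genesis / uncharted and 1.0 = commodity / industrialised
    "novel passenger experience": 0.15,
    "journey planning engine": 0.45,
    "payment processing": 0.80,
    "compute": 0.95,
}

def approach(evolution: float) -> str:
    if evolution < 0.3:
        return "agile, in-house build (explore the unknown)"
    if evolution < 0.7:
        return "lean, off-the-shelf products where possible"
    return "six sigma / utility services / outsource against a detailed specification"

for name, evo in sorted(components.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} evolution={evo:.2f} -> {approach(evo)}")
```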

Oh, and btw the maps I use are a communication tool between groups. With the sort of engineers you have at GDS and other Depts, this sort of thinking is often just second nature. You use commodity components / utility services and products where appropriate. You build only what you need and you use the right approaches to do so.

The beauty of forcing a specification document on everything is you force the customer into trying to treat all the components as the same, as though everything is industrialised. You are literally asking the customer to specify the unknown and then you crucify them later on through change control costs. The vendor can always point the finger and blame the customer for “not knowing what they wanted” but then the reality is they couldn’t know. The massive cost overruns through change control are not the fault of change but instead the structured process and the use of specifications where not appropriate.

Hence you have to be really careful here. If someone is asking you to sign up to an outcome based traditional model which in fact means delivery against a defined specification document for the entirety of a large complex system using a very structured process THEN you’ll almost always end up with massive cost overruns and happy vendors / consultants.

I have to be clear, IMHO this is a scam and has been known about for a long time.

So where does the report focus? The report talks about documentation, highlighting the example of MPA, and promotes pushing control to CCS (Crown Commercial Service). Hence we can be pretty confident that this will break down into specification documents. It argues “While GDS focuses on embedding quality staff within programmes, MPA pursues more formalised and documented processes” and then it promotes the view of MPA as the solution.

This argument is not only wrong, it is mischievous at best. GDS focuses on user needs, using high quality staff to build complex projects. It does a pretty good job of this and its output is functioning systems. MPA focuses on ensuring the robustness & soundness of projects that are undertaken. It does a pretty good job of this and its output is formal documents. You can’t say “they write documents, we like specification documents and therefore you should use those sorts of documents” as the context is completely different.  Some parts of a large complex project can and should be specified because they are known. Other parts are going to have to be explored. Some parts will need an outcome based approach. You're going to need good "quality" engineers to know and do this, along with specialists in procurement to support them.

The report then adds another twist - “As a matter of urgency, in order to manage commercial risk, all commercial activities within GDS should be formally passed over to the newly transformed Crown Commercial Service (CCS)”. Let us be clear on what this means. In all probability, we're going to end up forcing specification documents (an almost inevitable consequence of trying to get 'good value' from a contract) even where they are not appropriate, and handing them over to procurement specialists who are unlikely to have the necessary engineering skills to challenge what the vendors say. This is exactly what went wrong in the past.

IMHO, a more honest recommendation would be “As a matter of urgency, Gov should start handing over large wads of cash to us, preferably in a truck”.

For reference, if you want to know how to deal with a complex system then, once you have a map, I find the following a useful guide. Please note that for both methods and procurement techniques, multiple approaches are needed in a large complex system. This is another reason why you map: to break complex systems into components so you can treat them effectively. I cannot reiterate enough how important it is to have purchasing specialists supporting the engineering function. You don't want to lose those skills necessary to challenge. NB the diagram is not a replacement for thought, it's just a guide.

Methods & Purchasing.



THE EFFICIENCY RISK:
“With a monopoly position and a client-base compelled to turn to GDS for advice, there is a risk that they could become an inefficient organisation”

Should we roll the clock back and see what it was like before GDS and talk about inefficient organisation? I think Sally Howes, the NAO's executive leader, sums it up very politely with the statement “the government, Parliament and my own organisation, the NAO, were very aware of how the old fashioned world of long, complex IT projects limited value for money”. 

To put it bluntly, in my language, we were being shafted. We're nowhere near the end of the journey with GDS and the report completely ignores how Departments are adapting and growing capabilities. There's not much I can find to like in the report, though some bits did make me howl.

I loved the use of “proven methods” in the paper followed by “excellent opportunity for CCS to show that it can meet the needs of a dynamic buying organisation”. So basically, we believe in evidence, and on the back of that statement we recommend you experiment with something unproven that smells a lot like the past? Magic.

However, it is only surpassed by “This paper has no evidence to suggest that GDS is too big or too expensive to achieve its aims”, which followed a rant of “Is this meeting the needs of the government departments or is this excessive? Are they the right staff? Are they being paid enough? Do they have the appropriate skills?”

That’s consultant gold right there. I’m going to create a whole bunch of doubts about a problem I’ve no evidence exists in order to flog you a solution you probably don’t need. Here, have my wallet - I’m sold!

The paper then goes on to say: “To ensure market-driven efficiency of the remaining advisory function, this paper recommends that the advisory function form a joint venture with the private sector, allowing it to grow fast and compete for work alongside other suppliers”. Hang on - we have G-Cloud, we have GDS, we have growing Departmental skills, and we should hand advisory over to the private sector because it previously provided “limited value for money”?

I’m guessing they are after more than one truck load of cash. I’m pretty sure this isn’t the “high level vision of the future” that the Government is after.

Now don't take this post to mean that GDS is perfect, far from it. There’s plenty of good discussion to be had about how to make things better and about how departments can provide services to other departments. There has been some misinterpretation (e.g. the Towers of SIAM) and there has been some oversteering (e.g. a tyranny of agile) but that’s normal in such a complex change. The achievements already have been pretty remarkable but no-one should be under any illusion that it can’t be better. It can.

However, reasonable discussion or debate doesn't involve a consultancy publishing a report flogging a bunch of dubious and outdated methods - let’s take skill away from challenge, let’s hand advisory over to the private sector, let’s focus on specification documents - as solutions to risks which aren't even quantified. There's nothing to debate, it's just mudslinging. I'm guessing that's why they published it at a time when no-one could respond.

But what about the motivations of the authors? I see that one is the head of a government consultancy practice, and so is the other. I’m guessing they’re hoping to be on the advisory board and paid handsomely for such pearls of wisdom.

I note that Andy Mahon has “wide experience in public sector procurement” gained from his 28 years at BDO, Grant Thornton, KPMG and Capita covering initial business case to PFI. I’m not convinced that someone with so much experience of flogging to Government and working for a consultancy flogging to Government can ever be considered impartial when it comes to advising Government on how not to be flogged.

Now Jack Perschke is a different matter. He has a long background in different areas; he also worked for the ICT reform group and was a Programme Delivery Director for the Student Loans Company Transformation Programme. Well, this report is a bit odd, given his background.

From the minutes of the Student Loans Company (though Jack had just left), the board even took time to praise GDS, noting “the engagement with Government Digital Services (GDS) had been very helpful” and “GDS had improved the understanding of the work required, particularly around the build/buy options”.  Further minutes talk about ongoing discussion, challenge and support, e.g. from “responding to the conditions set by the Government Digital Service (GDS), including the benchmark for Programme costs” to the Board noting that “GDS were a key partner in the Programme”.

Surely this is how things should work? I’m surprised Jack Perschke didn’t see that. I can't see how you'd conclude this was a bad thing.

Well, if there is some good to come from the document, some silver lining, it is that IMHO it provides further indirect evidence of why Government should develop its own capability, skills and situational awareness throughout GDS and the departments. These sorts of reports and outside consultancy engagements rarely bring anything of value other than for the companies writing them.

I think my hunch that “major risks which need to be sorted” is code for “pay us lots of money” is about spot on.

I'll come back to this next week as I want to see what else crawls out of the woodwork here. I don't like civil servants being attacked especially by self interested outside consultants at a time when civil servants can't respond.


Your Password is Too Damn Short by Jeff Atwood

I'm a little tired of writing about passwords. But like taxes, email, and pinkeye, they're not going away any time soon. Here's what I know to be true, and backed up by plenty of empirical data:

  • No matter what you tell them, users will always choose simple passwords.

  • No matter what you tell them, users will re-use the same password over and over on multiple devices, apps, and websites. If you are lucky they might use a couple passwords instead of the same one.

What can we do about this as developers?

  • Stop requiring passwords altogether, and let people log in with Google, Facebook, Twitter, Yahoo, or any other valid form of Internet driver's license that you're comfortable supporting. The best password is one you don't have to store.

  • Urge browsers to support automatic, built-in password generation and management. Ideally supported by the OS as well, but this requires cloud storage and everyone on the same page, and that seems most likely to me per-browser. Chrome, at least, is moving in this direction.

  • Nag users at the time of signup when they enter passwords that are …

    • Too short: UY7dFd

    • Lack sufficient entropy: aaaaaaaaa

    • Match common dictionary words: anteaters1

This is commonly done with an ambient password strength meter, which provides real time feedback as you type.

If you can't avoid storing the password – the first two items I listed above are both about avoiding the need for the user to select a 'new' password altogether – then showing an estimation of password strength as the user types is about as good as it gets.
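As a minimal sketch of that kind of nagging (the tiny word list, the entropy threshold and the messages below are illustrative assumptions, not any particular site's rules):

```python
import math
import re

# A few very common base words; a real deployment would use a large
# dictionary or a purpose-built strength-estimation library.
COMMON_WORDS = {"password", "letmein", "qwerty", "dragon", "anteaters"}

def entropy_bits(pw: str) -> float:
    """Crude estimate: length * log2(size of the character set in use)."""
    charset = 0
    if re.search(r"[a-z]", pw): charset += 26
    if re.search(r"[A-Z]", pw): charset += 26
    if re.search(r"[0-9]", pw): charset += 10
    if re.search(r"[^A-Za-z0-9]", pw): charset += 33
    return len(pw) * math.log2(charset) if charset else 0.0

def nag(pw: str) -> list:
    problems = []
    if len(pw) < 12:
        problems.append("too short (aim for at least 12 characters)")
    if len(set(pw)) <= 2 or entropy_bits(pw) < 40:
        problems.append("lacks sufficient entropy")
    if re.sub(r"[0-9]+$", "", pw.lower()) in COMMON_WORDS:
        problems.append("matches a common dictionary word")
    return problems

for candidate in ("UY7dFd", "aaaaaaaaa", "anteaters1", "correct horse battery staple"):
    print(candidate, "->", nag(candidate) or "looks ok")
```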

The easiest way to build a safe password is to make it long. All other things being equal, the law of exponential growth means a longer password is a better password. That's why I was always a fan of passphrases, though they are exceptionally painful to enter via touchscreen in our brave new world of mobile – and that is an increasingly critical flaw. But how short is too short?

When we built Discourse, I had to select an absolute minimum password length that we would accept. I chose a default of 8, based on what I knew from my speed hashing research. An eight character password isn't great, but as long as you use a reasonable variety of characters, it should be sufficiently resistant to attack.
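The "law of exponential growth" here is just keyspace arithmetic: each extra character multiplies the number of possible passwords by the size of the character set. A quick check with a 62-character set (upper case, lower case, digits):

```python
# Keyspace grows exponentially with length: charset ** length.
charset = 62  # a-z, A-Z, 0-9
for length in (8, 10, 12):
    print(f"{length} characters: {charset ** length:.2e} possible passwords")

# Roughly:
#  8 characters: 2.18e+14
# 10 characters: 8.39e+17
# 12 characters: 3.23e+21
```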

By attack, I don't mean an attacker automating a web page or app to repeatedly enter passwords. There is some of this, for extremely common passwords, but that's unlikely to be a practical attack on many sites or apps, as they tend to have rate limits on how often and how rapidly you can try different passwords.

What I mean by attack is a high speed offline attack on the hash of your password, where an attacker gains access to a database of leaked user data. This kind of leak happens all the time. And it will continue to happen forever.

If you're really unlucky, the developers behind that app, service, or website stored the password in plain text. This thankfully doesn't happen too often any more, thanks to education efforts. Progress! But even if the developers did properly store a hash of your password instead of the actual password, you better pray they used a really slow, complex, memory hungry hash algorithm, like bcrypt. And that they selected a high number of iterations. Oops, sorry, that was written in the dark ages of 2010 and is now out of date. I meant to say scrypt. Yeah, scrypt, that's the ticket.
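For developers, the practical upshot is: never store the password itself, store a salted hash from a deliberately slow, memory-hungry KDF. A minimal sketch using the scrypt function in Python's standard hashlib (the work factors are the commonly quoted ones that also appear later in this post, not a claim about what any particular site uses):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, digest) from a deliberately slow, memory-hard KDF."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"),
                            salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.scrypt(password.encode("utf-8"),
                            salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("anteaters1", salt, stored))                    # False
```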

Then we're safe? Right? Let's see.

You might read this and think that a massive cracking array is something that's hard to achieve. I regret to inform you that building an array of, say, 24 consumer grade GPUs that are optimized for speed hashing, is well within the reach of the average law enforcement agency and pretty much any small business that can afford a $40k equipment charge. No need to buy when you can rent – plenty of GPU equipped cloud servers these days. Beyond that, imagine what a motivated nation-state could bring to bear. The mind boggles.

Even if you don't believe me (but you should), the offline fast attack scenario, which is much easier to achieve, was hardly any better at 37 minutes.

Perhaps you're a skeptic. That's great, me too. What happens when we try a longer random.org password on the massive cracking array?

  • 9 characters: 2 minutes
  • 10 characters: 2 hours
  • 11 characters: 6 days
  • 12 characters: 1 year
  • 13 characters: 64 years

The random.org generator is "only" uppercase, lowercase, and number. What if we add special characters, to keep Q*Bert happy?

  • 8 characters: 1 minute
  • 9 characters: 2 hours
  • 10 characters: 1 week
  • 11 characters: 2 years
  • 12 characters: 2 centuries

That's a bit better, but you can't really feel safe until the 12 character mark even with a full complement of uppercase, lowercase, numbers, and special characters.
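The arithmetic behind those tables is simply keyspace divided by guessing rate. Working backwards from the quoted times, the implied rate is roughly 10^14 guesses per second; the sketch below reproduces the shape of the tables under that assumption (real rates depend entirely on the hash algorithm and the hardware):

```python
# Rough reconstruction of the tables above: keyspace / guess rate.
# The ~1e14 guesses-per-second figure is inferred from the quoted times and
# stands in for a "massive cracking array"; real rates vary enormously.
GUESSES_PER_SECOND = 1e14

def crack_time(charset_size: int, length: int) -> str:
    seconds = charset_size ** length / GUESSES_PER_SECOND
    for unit, factor in (("centuries", 3.156e9), ("years", 3.156e7),
                         ("days", 86400), ("hours", 3600), ("minutes", 60)):
        if seconds >= factor:
            return f"{seconds / factor:.0f} {unit}"
    return f"{seconds:.0f} seconds"

for length in range(8, 14):
    print(f"{length:2d} characters: {crack_time(62, length):>12} (letters + digits), "
          f"{crack_time(94, length):>12} (plus specials)")
```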

It's unlikely that massive cracking scenarios will get any slower. While there is definitely a password length at which all cracking attempts fall off an exponential cliff that is effectively insurmountable, these numbers will only get worse over time, not better.

So after all that, here's what I came to tell you, the poor, beleaguered user:

Unless your password is at least 12 characters, you are vulnerable.

That should be the minimum password size you use on any service. Generate your password with some kind of offline generator, with diceware, or a passphrase approach – whatever it takes, but make sure your passwords are all at least 12 characters.

Now, to be fair, as I alluded to earlier all of this does depend heavily on the hashing algorithm that was selected. But you have to assume that every password you use will be hashed with the lamest, fastest hash out there. One that is easy for GPUs to calculate. There's a lot of old software and systems out there, and will be for a long, long time.

And for developers:

  1. Pick your new password hash algorithms carefully, and move all your old password hashing systems to much harder to calculate hashes. You need hashes that are specifically designed to be hard to calculate on GPUs, like scrypt.

  2. Even if you pick the "right" hash, you may be vulnerable if your work factor isn't high enough. Matasano recommends the following:

    • scrypt: N=2^14, r=8, p=1

    • bcrypt: cost=11

    • PBKDF2 with SHA256: iterations=86,000

    But those are just guidelines; you have to scale the hashing work to what's available and reasonable on your servers or devices. For example, we had a minor denial of service bug in Discourse where we allowed people to enter up to 20,000 character passwords in the login form, and calculating the hash on that took, uh … several seconds. The simplest sanity check is to time your chosen hash, with its work factors, on your own hardware - there's a rough sketch of that below.
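As a sketch of that sanity check (the work factors are the ones quoted above; the latency budget and the input length cap are illustrative choices, the cap being the kind of limit that avoids the Discourse-style denial of service just mentioned):

```python
import hashlib
import os
import time

MAX_PASSWORD_LENGTH = 200   # cap input length so huge passwords can't DoS the hash

def timed_kdfs(password: str) -> float:
    if len(password) > MAX_PASSWORD_LENGTH:
        raise ValueError("password too long")
    salt = os.urandom(16)
    start = time.perf_counter()
    # scrypt with N=2^14, r=8, p=1 (as quoted above)
    hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    # PBKDF2-HMAC-SHA256 with 86,000 iterations (as quoted above)
    hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 86_000)
    return time.perf_counter() - start

elapsed = timed_kdfs("a perfectly ordinary passphrase")
print(f"one run of each KDF took {elapsed:.3f}s in total")
# If this is far outside your latency budget, adjust the work factors accordingly.
```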

Now if you'll excuse me, I need to go change my PayPal password.



Can nuclear waste help humanity reach for the stars? by The Planetary Society

With the shortage of plutonium-238 to power space missions, Europe has decided to focus on an accessible alternative material that could power future spacecraft: americium-241.


April 23, 2015

We Need To Talk About Joss Whedon. by Feeling Listless

Film Yes we do. Hey Joss. Thanks Joss. Somehow in the midst of everything you still managed to create in MARVEL's The Avengers: Age of Ultron something which is comprehensively, comprehendingly and colossally a Joss Whedon film, tonally and philosophically different to the other films in the MCU franchise and with all the Whedonesque stuff which permeates all of your work.  Not that you're reading this, but thanks all the same.  It's just the pick me up I needed.  Now for the rest of you, here's a big long list of discussion points which is full of spoilers, so it should be avoided if you haven't seen MARVEL's The Avengers: Age of Ultron yet.

(1)  There is no end of credits sequence.  There's a bit after what would have been the opening credits sequence in the olden days, which now seems to be slapped on the end of films with the actors' names and so forth, but no, as Joss and Kevin have widely publicised in interviews, there is no Shawarma II.  Not that this stopped the twelve of us in screen one at FACT's Picturehouse this morning from sitting all of the way through the credits anyway.  Just in case.

(2)  Can we stop with the creating of so many brilliant characters who we know will and can only have a limited amount of screen time?  Elizabeth Olsen's Scarlet Witch is a magnificent creation and although she'll apparently be turning up in Captain America: Civil War (along with pretty much everyone left on Earth at the end of The Avengers), at the close, as with Black Widow, as with Hawkeye, as with the Ruffalo Hulk, you really, really want to see them in their own film.  Or television series.  Or whatever.

(3)  Something the film does especially well is foregrounding the characters who don't have their own film franchises without really short changing those who do.  In the first film, Joss and the gang quite rightly set out their stall on the excitement of seeing Iron Man, Thor and Cap in the same film together.  That's especially true of Jeremy Renner, whose distaste for how he was used in the first film actually becomes a plot point in the second.  Giving him a wife and family grounds him and also makes him the heart and humanity of the team, putting him in line with Xander or Cordelia.  Plus there's the appearance of Julie Delpy (goodness) in what's essentially a version of Jeanne Moreau's character in Luc Besson's Nikita during Romanov's dream/backstory.

(4)  Rolling Stone has a good interview with Joss about making MARVEL's The Avengers: Age of Ultron in which he alludes to a slightly manic editing process.  I think you can tell.  It is a film in which potentially useful character moments and exposition aren't there.  Apparently the original cut was about three hours, and it's not so much that anything is underdeveloped but that some of the pacing is all over the place.  Just every now and then I wished it would stop so we could see more of something (see (2) above).

(5)  Isn't it roughly the same plot as Buffy's Once More With Feeling?  Which I'm about to spoil?  At the end of Once More With Feeling it becomes apparent that all the singing and dancing and death and the emergence of Sweet is as a result of Xander dabbling in magic.  As he says, "I didn't know what was going to happen.  I thought there would be dances and songs.  Just wanted to make sure we'd work out, get a happy ending."  In MARVEL's The Avengers: Age of Ultron, Stark and Banner meddle with science which results in action and death and the emergence of Ultron for similar reasons.  They too don't really know what's going to happen.  In both cases the super teams are actually spending most of the story clearing up a mess made by one of their own number.

(6)  If there's a problem, and this has nothing to do with MARVEL's The Avengers: Age of Ultron, it's that it doesn't especially change any games in the same way as MARVEL's Captain America: The Winter Soldier or MARVEL's The Guardians of the Galaxy or indeed the first film.  Although the destruction of the Hydra base may have some effect on Agents of SHIELD (is Henry Goodman's Dr List dead now?) there's a business as usual feel to the thing.  Of course, it's amazing business and it is important to pace yourself.  But we're being wowed in the Iron Man 3 or Thor: The Dark World sense of the word.

(7)  For all Joss has said about the film being complete in and of itself, it does still quite rightly feel like a middle film and also an "episode".  Plenty of the story is a result of the first film and there's a lot of 'splaining ready for Infinity War, and also foreshadowing for other future installments of the franchise, not least the fractures in Rogers and Stark's relationship and that slightly odd moment in the middle when Thor buggers off and picks up Selvig so he can go and stand in a magical cave pool.  There's a lot of trust put in the audience here that we understand the language of these MCU films now, the interconnectedness, and that we're willing to go along with these narrative detours.

(8)  What happens after 2019?  It's a bit soon of course, but pretty much everything in all of these films since 2008 is leading up towards Thanos presumably visiting Earth with that glove (probably chased by The Guardians of the Galaxy for good measure).  My guess is that MARVEL's hedging.  It knows that every genre has a cycle and that however much money these films are making now, fatigue will set in, especially with so many, what could be deemed, third string characters in the Third Wave.  In that case the Infinity War films could become the massive finale wrap up for the franchise as is, if need be.  They'll certainly presumably be the last of The Avengers films although ...

(9)  Anyone know why the film has two composers?  Brian Tyler's score has been augmented by Danny Elfman, or vice versa.  Was Elfman's contribution just to rework Silvestri's The Avengers theme?

(10)  Did anyone else with a like mind think of Doctor Who's The Sontaran Stratagem when SHIELD's Helicarrier put in its appearance?


Pick a course, adapt as needed. by Simon Wardley

Ok, a bit of history to begin with. When I took over running Fotango (a Canon Europe subsidiary), it was a loss making organisation. It took me a year to make it profitable. We grew the business by taking our skills and applying them to relevant areas. In the end we were managing, developing and operating over a dozen major systems with millions of users.

However, we had constraints, the two most challenging of which were head count and profitability. We had to operate on a basis of no head count increase (this was due to a parent-wide rule), which forced us to automate more, re-use and find ways to create space for development. The second constraint was that we had to be profitable - every month. The latter is a real headache when you have millions in the bank but can't invest. Any investment we wanted to make had to come through operational efficiency, which in no small part is why we ended up implementing some of the first web based private infrastructure as a service, auto configuration, continuous deployment and self healing tools between 2003-2005.

In the board room, James and I used a map to determine where we could attack, to plot our path. I've taken a version of that map and rolled it forward to mid 2007 in order to illustrate some points. The map is provided in the following figure.


Now the map gives us the position of things in the value chain (from visible user need to hidden components) versus movement (i.e. how things evolve). On this map is one of the lines of business we had.

From the map, there are several points we could attack.

Point 1 - Attack compute provision as a utility. We actually had a system called Borg which ran our private IaaS. We had offered this to other vendors and planned to open source it later in 2007. Whilst we couldn't build a public IaaS (due to the capital investment required and the constraints we had), that didn't mean we didn't want to see a fragmented market of providers in this space.

Point 2 - Attack platform provision as a utility. We had actually embarked on this route based upon the earlier maps and launched the first public platform as a service, known as Zimki. We had all the capabilities necessary to build it, and back in 2005 we had anticipated that someone else would launch a public IaaS. I thought it was going to be Google; it turned out to be Amazon. The importance of a public IaaS for us was that it would get us over our investment constraint. We planned to open source the space in late 2007, had the components for an exchange and a rapidly growing environment etc. The play itself was almost identical to Cloud Foundry today.

Point 3 - Attack CRM as a service. We had looked at this in 2005, decided we didn't have the skills and others were moving into the space.

Point 4 - Attack Apps on Smart Phones. Back in 2004 we were working on mobile phones as cameras, however there was no way to anticipate the development of the iPhone. In 2007, we might have made a play in this space based upon past skills, but we had effectively removed those parts of the value chain from the organisation. We had to concentrate somewhere in 2005; we had the constraint of resource growth; we had to make a choice. That choice was the platform play. But in 2007, it could be an option.

Point 5 - Build something new. We certainly had the capability to experiment, we used hack days and other tools to come up with a range of marvellous ideas. However the resource constraint meant we needed to industrialise the platform and get ourselves and others to build on top of this. We could use ecosystem effects to therefore sense and identify future success.

Now, I've simplified a lot of the thought processes along with the actual map, but the point I want to make is that we had multiple points of attack - the WHEREs.  The WHY was a discussion over which we could exploit given our constraints such as resource & investments along with capabilities. This gave us our DIRECTION. 

Each node or point on a map actually breaks down into a more complex map of underlying components. Some of those were novel (the uncharted) and some were more commodity (the industrialised). We knew how to apply multiple methods (agile, six sigma etc) appropriately, how to build and exploit ecosystems and a vast range of tactical games we could use. 

However, once we determined our DIRECTION, we moved deliberately along that path. Yes, we had very fast deployment and development cycles. Multiple builds of a component going live in a single day was nothing special. However, that tempo wasn't uniform. Releases in the uncharted space would happen continuously. In the more transitional space (between uncharted and industrialised) it slowed down considerably, and by the time you reached industrialised then releases could be monthly, much more regimented. We had been running as an API shop since 2003 and we had long since learned the lesson that you couldn't go around changing the APIs of the deep underlying components ten times a day without causing friction and cost in the higher order systems.

This is why we'd look to move those more industrialised, lower order components to outside and stable utility providers. Unfortunately, though we anticipated their development, none existed in 2004 - 2005. There wasn't an Amazon, Scalr, RightScale, Chef or any of the other infrastructure, management, configuration and monitoring environments. We had to build all this just to get to the platform layer, and our speed depended upon the stability of lower order interfaces.

Take something today like Netflix. They could not have existed if Amazon changed the core APIs of EC2 twenty to thirty times a day. Stability of the interfaces of lower orders is critical for development of higher order - this is basic componentisation. 

Now, Fotango's story ended due to a sorry tale of strategic advice, which is why you often find me at conferences throwing rubber chickens with the words "Situational Awareness" at big name consultancies as they mumble "blah, disruption, blah, digital, blah, cloud, blah, ecosystem, blah, innovation, blah". Especially at Big Data conferences, where they seem to gather to flog blobs of "wisdom" to the unsuspecting masses.

However, there are some things I do want to remind people of.

Have a DIRECTION. 
This is one of the most important parts of mapping & improving situational awareness. You not only need to learn to use multiple methods (e.g. agile, lean and six sigma) but you also need to understand the landscape and steer your way through it. Maps are dynamic and yes, sometimes you have to pivot based upon changing conditions. However, Agile is not a solution for indecisive and variable management. When moving in uncharted space you still need a DIRECTION and you adapt to what you discover. You don't need a captain who can't keep a decision for more than five minutes without changing it. If the reason you're using Agile is because your manager is going Fire! Aim! Change Course! Don't Fire! Did we Fire? Fire! No, Don't Fire! Change Course! Change Course! Don't Fire ... wait .. FIRE! ... No! Change Course! then you've got bigger problems than methods.

Move APPROPRIATELY fast.
Yes, continuous release processes are great for exploring uncharted spaces and building higher order systems. However, you need stability of interfaces at lower order systems (which includes not just syntax but semantics). For anyone who doesn't understand this, hire a crew of electricians to replace all the sockets in the buildings & data centre with sockets and transformers to supply power equivalent to a different region. Call it a 'release' and watch the expressions of horror when nothing works / plugs in. After they scream murder at you and finally get around to setting some stuff up, send your electricians around to replace it all with another region. Do try shouting at everyone that you were only being adaptive & innovative whilst they beat you with their dead computers.

Focus on USER needs
That's the first step of mapping and hardly worth repeating because you should be doing this. Of course, if you weren't actually doing this you might run around changing plug sockets to a different region. Ditto some of the changes I see in the online world.

Before anyone says "Oh but we can make special adaptors to cope with the change" which invariably leads to a host of different competing standards and then someone creating the standard of standards ... just give up.

Use APPROPRIATE methods
I'll use one diagram and go - enough said.


If anyone feels like going "Have you considered using dual operating / twin speed IT / bimodal?" or any of the other "organise by the ends" lines from that brigade - don't even go there.


How to draw a dragon using your Dragon 32. by Feeling Listless

That Day It's St George's Day and a twitterversation has just reminded me of the old Marshall Cavendish part collection, INPUT Magazine, which was all about how to program 8-bit computers using BASIC, and after about two seconds of looking at the scans on the Internet Archive, I found this article about drawing a dragon sprite. They're from the third issue.  Click on them so you can read what they say, should you want to:



Must have been a real pain to be a TANDY user and have to make those sorts of modifications each and every time.


Art of the Title on Orphan Black. by Feeling Listless

TV Good lord, I hope BBC UK hurry up and announce the broadcast:

"It was a bit overwhelming at first, trying to create a concept that would complement the show in a cool way. I tried to keep certain guiding principles in mind during development. How to create a pretty blossoming flower based on fungus, was the first thing that jumped out at me. I felt this image was important to convey, for it was symbolic of Tatiana."


Spider-rights. by Feeling Listless

Film Animated Spider-Man. At a moment when it looked like everything was becalmed in the MCU in relation to Spider-rights, Sony have announced an animated Spider-Man film for release in 2018. My initial thought was "Lego!" after seeing Phil Lord & Christopher Miller were involved; then I thought it might be a way to wrap up their version and Andrew Garfield's Peter Parker's story. But there's a sentence in the press release which is supposed to be a denial but opens up an intriguing prospect:

“The film will exist independently of the projects in the live-action Spider-Man universe, all of which are continuing.”

Which projects?

Does this mean we might still see a live action Spider-Man film with Garfield et al alongside the MCU version? Or was the press release prepared before the MCU announcement and what this should read is something along the lines of "live-action Spider-Man in the MCU universe" or some such.

Kevin Feige's name doesn't appear anywhere on this film, which has roughly the same executive team as the Garfield films, which is also rather confusing, just as the project will doubtless create confusion when it's released in three years' time.  "So is this Thor going to be in this?", that sort of thing.

Odd.


“Your heart sounds just fine, PSO J334.2028+01.4075″ by Astrobites

If we read the light curve (its brightness versus time) of a quasar (that is, a supermassive black hole with a jet) like an electrocardiogram, we’d conclude that lots of quasars are having heart attacks. Their signals vary in brightness randomly, like the beats of an arrhythmic heart. The randomness of their emission may be related to their central supermassive black holes, which are surrounded by blobs of gas, dust, and stars, accreting onto the black hole at irregular intervals. The astronomers of today’s paper, however, found a quasar with a regular heartbeat. Quasar PSO J334.2028+01.4075 has a very healthy heart rate of 6.7 beats per decade, or once every 542 days. One explanation is that this guy hosts a pair of supermassive black holes. If true, then the astonishing interpretation of this quasar’s heart rate is that its black holes are only a few orbits away from merging! How did they catch their patient at such a critical stage?

Figure 1. Surface brightness of an accretion disk surrounding a supermassive black hole binary from a simulation by Farris et al. 2014. Notice the variations induced by the black hole orbits.

In fact, Liu et al. didn’t just stumble upon this quasar. They went looking for it. Here was their train of reasoning: Deep surveys reveal lots of galaxies merging long ago. After a galactic merger, the two central black holes (every big galaxy has one) will migrate to the new galaxy’s center, begin to orbit closely, and drive periodic accretion. Recent simulations reveal that a quasar periodically accreting like this could be visible as a sinusoidally-varying light curve. You can see waves of matter accreting onto the binary black hole in one such simulation in Fig. 1.

If the binary is very near the end of its life, when gravitational radiation begins to drive its rate of inspiral, its orbital period will be of the order of years (that is, for billion-solar-mass black holes—a binary’s inspiral timescale increases linearly with its total mass). In fact, several candidate supermassive black hole binaries have already been identified, for example as two bright spots in the center of a quasar. But these black holes are separated by thousands of parsecs, perhaps not even gravitationally bound to one another. Liu et al. were searching for a pair of supermassive black holes headed doggedly toward merger, not flirting at thousands of parsecs. So they rolled up their sleeves and dug through a multi-year survey of a small patch of sky, looking for a quasar with a light curve rising and falling once every few years.

Liu et al. belong to the Pan-STARRS collaboration, involved in rapid optical surveys of much of the sky since 2010, looking for things that flicker and blink. To fulfill one of their projects, the Medium Deep Survey, they observed ten patches of sky daily in five different colors. If you’re looking for quasars with light curves that oscillate on a timescale of years, this is the perfect data set to chew on.

Figure 2. Color-color diagram of all the point sources in the field of view. A relative color-magnitude scale is given on each axis, defined for any given point by subtracting its brightness in one color filter from its brightness in some other color filter (more negative means brighter). The horizontal-axis is “ultraviolet-ness”, and the vertical-axis is “green-ness”. Thus objects with relatively more light at high energies (like quasars) are clustered in the bottom left.

The color data were helpful in gleaning a subset of potential quasars from all the bright dots in their field of view. The astronomers particularly wanted to avoid misidentifying a variable star (like an RR Lyrae) as a periodically-varying quasar. At optical wavelengths, quasars actually look a lot like stars, which is how they earned their name. But they are relatively brighter at shorter wavelengths than stars are. Liu et al. used a color-color filter, represented in Fig. 2, to select 316 candidate quasars.

Then they identified a subset of 168 quasars with large variations in brightness over the four years of data. Since they were on the hunt for periodic variations, they ran a Fourier analysis separately on each color channel of these light curves, looking for sinusoidal variations. They found 40 quasars with regular heartbeats visible in two or more color channels. Because the baseline of their observation was only four years, their search was only sensitive to periods less than that. In this paper, they present their most significant detection, PSO J334.2028+01.4075. Lightcurves for this quasar in each of four color channels are shown in Fig. 3, folded over the 542 day period.

Figure 3. The brightness measurements of this quasar in each of the four observation bands. The best-fit sinusoid is overlaid as a dashed line.
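The paper's actual pipeline isn't reproduced here, but the basic idea of searching an irregularly sampled light curve for a periodic signal can be sketched with a Lomb-Scargle periodogram (the data below are synthetic, with an injected 542-day period and made-up noise):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)

# Synthetic, irregularly sampled light curve: ~4-year baseline, 542-day period.
t = np.sort(rng.uniform(0, 4 * 365.25, size=250))                    # days
true_period = 542.0
mag = 0.2 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, t.size)

# Periodogram over periods shorter than the observing baseline.
frequency, power = LombScargle(t, mag).autopower(minimum_frequency=1 / 1400,
                                                 maximum_frequency=1 / 50)
best_period = 1 / frequency[np.argmax(power)]
print(f"best-fit period: {best_period:.0f} days")  # recovers roughly the injected 542 days
```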

Assuming the virial relationship between quasar spectral properties and black hole mass holds in this weird case with two black holes at the center, Liu et al. inferred that the total mass of the black holes is 10 billion solar masses. Then, identifying the period of the lightcurve with the orbital period of the binary, they calculated a black hole separation of 3-13 milliparsecs, or just 10 or so widths of the central black holes! If their interpretation of the light curve is correct, these central black holes are well on their way to merger, with only a handful of orbits left before that cataclysmic event.
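The separation estimate follows from Kepler's third law once you adopt a total mass and identify the lightcurve period with the orbital period. A quick version of that calculation (it ignores, for example, the correction from observed to rest-frame period and the uncertainty in the virial mass, which is part of why the paper quotes a range rather than a single number):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # parsec, m

M_total = 1e10 * M_SUN      # ~10 billion solar masses (virial estimate)
P = 542 * 86400.0           # observed period, seconds

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"separation ~ {a / (1e-3 * PARSEC):.0f} milliparsecs")
# ~14 milliparsecs, near the top of the quoted 3-13 milliparsec range
```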

The authors make sure to point out that there are other explanations. A single black hole could present a precessing jet, a little like a pulsar, which wobbles more or less in and out of view. But jet precessions usually occur on timescales of hundreds to millions of years. They also highlight a discovery announced in Nature only months before their own, of a similar periodically-varying quasar, with a period of five years. That supermassive black hole binary candidate, however, is lower-mass and nowhere near merger yet. That group also proposes alternative explanations, including hotspots in the accretion disk, or a precessing disk, but none are as simple as the supermassive binary black hole explanation.

Liu et al.’s diagnosis of PSO J334.2028+01.4075 will be proven true or false within the decade. Alternatively, a more nuanced understanding of accretion onto binary black holes may contribute to a modified diagnosis. (“I’m terribly sorry about this, but we seem to have made an error. It appears you’ve got another billion years, PSO J334.2028+01.4075.”) But even at this early stage of discovery, Liu et al. have introduced an exciting new technique into time-domain extragalactic astronomy. Similar searches will easily be performed on larger surveys like those of the Large Synoptic Survey Telescope, uncovering thousands more periodic quasars, any one of them a potential host to a supermassive black hole binary on the verge of merger.


April 22, 2015

AWS to report by Simon Wardley

Many, many years ago, back in the days when I worked at Canonical, I calculated a forward run rate for AWS. This was based upon existing analyst predictions of revenue, a period of exponential change (a punctuated equilibrium), some expectation of price elasticity and a lot of voodoo & jiggery-pokery.

I said that eventually Amazon would have to report the AWS earnings (e.g. due to the 10% reporting rules of SFAS 131), though I expected this to be in 2016. I would occasionally add on analyst predictions each year to confirm / deny the change, but the problem was that no-one really had a clue. It was all speculation.

So looking at the model, where did I have 1Q2015 pegged? I had it pegged at a forward run rate of $2.38 billion per quarter. By the end of 2015, I was expecting Amazon to have a forward run rate of $16 billion p.a. and hence for each subsequent year to make more than $16 billion in revenue.

If you think this sounds an odd way of doing things - that's because it is a bit odd. The model is based upon a future test of a hypothesis that something is greater than a certain value rather than based upon trying to calculate what a value is at some specific point in time. There is a reason for this but it's rather obscure and not what is of interest.

Figure - Forward Run Rate


My interest is not so much in tomorrow's reporting (and I suspect there will be gasps in some quarters) but in the subsequent quarters and the rate of change. My interest is in just how fast the punctuated equilibrium is moving.

I do get asked what I think the reported revenue will be. I haven't got a clue.

If I took the forward run rates of the model for that quarter and the previous one, then simply taking an average would have revenue reporting at around $2.2 billion. But this ignores any variation due to price changes and any seasonality impacts (I really only concern myself with the magnitude of the annual figures), and given this model was written many years ago, is based upon a lot of assumptions, and actual revenues depend upon competitors' actions - even if it's close, that's more luck than judgement.

I'll be happy if we're talking about AWS revenue using $Bn's because that at least demonstrates the change was not linear and the punctuated equilibrium is in full effect. Still, the waiting should be over. We should find out soon enough but I'll need a few more quarters of data to get a really clear picture.


My Favourite Film of 1999. by Feeling Listless



Film How often do you analyse films? I mean really analyse them with a pad and paper and enquiring mind? My film studies degree demanded this of me, especially my dissertation which as we've discussed, at length, investigated network narratives, ensemble pieces and hyperlink films of which Magnolia is a prime example.

One of the inherent structural problems with these films is in allowing their various characters to have complete stories. In some cases, usually if Robert Altman's directing, the strategy is not to bother, to demand that the audience fills in the blanks, force us to utilise our imagination to explain potential inconsistencies or time gaps.

But mainstream films won't allow this.  Mainstream storytelling wants, needs, complete stories with a beginning, a middle and a satisfying conclusion, which often leads to systematic works in which narratives pile up on top of one another and, in the case of Crash or Love Actually, have multiple climaxes mechanically ramming into one another.

Characters are also often very insubstantial.  Because we're essentially watching a bunch of short stories edited together, they rely more than most on casting shorthand, Ashton Kutcher playing the kinds of characters Ashton Kutcher usually plays.  Or Matt Dillon.  Or Sandra Bullock when she has her serious face on.

Magnolia sits somewhere in the middle of these extremes.  At three hours long it doesn't seem terribly mainstream and the casting with the exception of Tom Cruise doesn't either.  But director Paul Thomas Anderson still realises the inherent problems in the form and makes a single, magnificent leap to deal with it.

Knowing it was a key "text", I watched the film several times during that dissertation summer looking for various things, trying to recognise how the various characters interact, how their stories fit together.  Pages and pages and pages of notes all of which pointed to it being a classic of the form.

I'd remembered how substantial all the characters felt when I'd first seen the film and after viewing again, whilst mapping out the relationships, I noticed it again.  All of the characters had depth and none of the stories really felt short changed, with proper arcs and completely satisfying conclusions.

On the fourth pass, I realised why.  The middle hour of the film is only half an hour within the world of the film.  Or in other words, it takes us a whole hour to watch a half hour quiz programme.  Paul Thomas Anderson does the reverse of what's usually expected and slows down time within the world of the film.

Here's how I explained that within my dissertation:

"During the second act of Magnolia, Paul Thomas Anderson reorganises time to such an extent that the plot duration is actually slower than the screen duration.  With the intercutting of scenes, the moment when Jimmy asks Stanley to join him for the final round of the quiz lasts nearly five minutes and the closing titles for the television programme over three minutes.  This allows all of the plotlines to receive due attention, experimental editing actually defragmenting the narrative making it far more coherent than if the plotlines had been allowed to run in the usual causal manner, missing out those events that run in parallel.  The dissolution of these barriers, using avant-garde editing to clarify the narrative is another example of hyperlink cinema flirting with post-modern ideas."

"Plot duration" might need some explanation.  In film, narrative manifests itself in two ways.  The "story" is all the story that a narrative is about.  In a murder mystery this is everything from when the motive shows itself right through a conviction or not.  The "plot" are the sections of that which actually appear on screen.

In Magnolia, Anderson essentially shows us a bunch of scenes which are happening simultaneously one after the other, doing what films rarely do but comics do all the time, the "meanwhile", which allows him to show us the scenes which might otherwise have to be inferred later or explained in exposition.

Which is brilliant.  Brilliant.  Plus it's done in such a way as to be hidden from the viewer.  It's not obvious because in these scenes there isn't much cutting between characters and stories, everything is relatively self contained, keeping everything within separate worlds.

It's not until the credits roll on the quiz, once the final credit runs, that everything speeds up again, and mayhem breaks loose with the frogs and guns falling from the sky and Aimee Mann, and once you've noticed this, the whole film becomes even richer.  Oh and I first saw this at the Odeon on London Road, should you be keeping track.


Signals from Hidden Dwarf Galaxies by Astrobites

Title: Beacons in the Dark: Using Novae and Supernovae to Detect Dwarf Galaxies in the Local Universe
Authors: Charlie Conroy and James S. Bullock
First Author’s institution: Dept. of Astronomy, Harvard University
Status: Accepted to ApJ

Dwarf galaxies are, as the name implies, the smallest of the galaxies. In our local neighborhood around the Milky Way, they range in size from the small Segue 2, with a mass of about 5 x 10^5 solar masses, to the Large Magellanic Cloud (LMC), with a total mass of about 10^10 solar masses. Figure 1 shows an image of the LMC, and its companion, the Small Magellanic Cloud. Although we have a fairly good understanding of galaxy formation and evolution for massive galaxies (think the Milky Way and bigger), our understanding of these smallest galaxies is not nearly as complete. This is partly because these smaller, less luminous galaxies are challenging to observe at distances larger than a few megaparsecs (Mpc) from our Milky Way (the nearest massive galaxy to us, Andromeda, is about 0.77 Mpc away). Within that distance, we can detect them by resolving their stars. But for far away dwarf galaxies, an improved detection method to increase the number of observed dwarf galaxies would go a long way towards improving what we know of these small galaxies.


Figure 1: An optical image of the Large and Small Magellanic Clouds. These are both Local Group dwarf galaxies, and satellites of the Milky Way. (Source: ESO)

The authors of today's astrobite propose a new method relying on novae and supernovae explosions to detect dwarf galaxies farther away than what is currently possible. Novae and supernovae are among the brightest astronomical events that we can observe. Novae occur in binary star pairs containing a white dwarf and a larger companion star. As gas accretes from the larger star onto the white dwarf, the hydrogen on the surface of the white dwarf eventually hits a critical mass, nuclear burning occurs, and a nova is produced. This is not to be confused with a Type Ia supernova, however, where the total mass of the white dwarf hits a critical limit (1.4 solar masses), and explodes. Both of these events are very luminous, but short lived. Detecting them is often a matter of getting lucky (looking in the right direction at the right time). The authors propose using current and ongoing surveys designed to detect these transient events to identify new and otherwise unobservable dwarf galaxies. (See this related astrobite discussing yet another way to detect dwarf galaxies.) The authors use one of these, the planned Large Synoptic Survey Telescope (LSST), as a baseline to judge what will be observable in the near future.

The Faintest of them All


Figure 2: The surface brightness (left) and radius (right) of all known dwarf galaxies in our local neighborhood as a function of their absolute visual magnitude (bottom axis). This is converted to stellar mass in the top axis. In the left plot, the dashed line shows the detection limit of the LSST for galaxies where the individual stars cannot be resolved (i.e. for galaxies farther than about 3 Mpc from us). The dashed lines in the right plot show (roughly) the distances out to which galaxies of the given radius are resolvable by LSST. (Source: Conroy & Bullock 2015)

Figure 2 illustrates the difficulty in directly detecting dwarf galaxies. Shown are the surface brightness (left) and radius (right) of all known nearby dwarf galaxies as a function of the absolute visual magnitude of the dwarf galaxies (translated to stellar mass at the top axis). Most dwarf galaxies have been detected by resolving their individual stars. However, this can only be done out to a few Mpc; beyond this, they can only be detected if they are above a certain brightness. The dashed line in the left plot shows this limit for the faintest object the LSST can detect. In other words, anything below the dashed line can only be detected if it is within a few Mpc of us. Therefore, there may be many galaxies that we simply cannot (yet) observe. In the right hand plot, the dashed lines give the farthest distances at which galaxies of the indicated size (0.1 kpc and 1.0 kpc) can be resolved by LSST (20 and 200 Mpc away respectively).
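
To make the brightness limit concrete, here is a minimal sketch (my own, not from the paper or the LSST documentation) of the standard distance-modulus calculation. It treats a galaxy as a single point of light, ignores surface brightness entirely, and uses a placeholder limiting magnitude rather than LSST's real depth.

    import math

    def apparent_magnitude(abs_mag, distance_mpc):
        # Standard distance modulus: m = M + 5*log10(d_pc) - 5
        distance_pc = distance_mpc * 1e6
        return abs_mag + 5 * math.log10(distance_pc) - 5

    def is_detectable(abs_mag, distance_mpc, limiting_mag=24.0):
        # Brighter objects have numerically smaller magnitudes.
        # 24.0 is an illustrative placeholder, not the real LSST limit.
        return apparent_magnitude(abs_mag, distance_mpc) <= limiting_mag

    # A faint dwarf with absolute magnitude M_V = -8 seen from 5 Mpc
    print(apparent_magnitude(-8.0, 5.0))  # ~20.5
    print(is_detectable(-8.0, 5.0))       # True, under the placeholder limit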

The authors argue that ongoing and upcoming surveys looking for transient events, like the LSST, will observe many novae and supernovae associated with undiscovered dwarf galaxies. In order to predict the likelihood of this occurring, and the types of dwarf galaxies we may discover, the authors construct a model to predict the rate of novae and supernovae in dwarf galaxies. The authors take what we know about the distribution of dwarf galaxies, how their star formation rates and histories vary as a function of their stellar masses, and make some assumptions about how often novae, Type Ia supernovae, and Type II supernovae occur as a function of the star formation rate of the galaxy. Combining these factors, and noting the significant uncertainties in some of their assumptions, the authors predict rates of novae and supernovae in dwarf galaxies as a function of the dwarf galaxy's stellar mass.
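
The shape of that calculation can be sketched in a few lines. The version below is a toy, not the authors' model: it assumes each class of transient scales linearly with the star formation rate, ties the star formation rate to stellar mass with a placeholder normalisation, and every coefficient is illustrative only.

    def star_formation_rate(stellar_mass):
        # Toy SFR in solar masses per year; the 1e-10 normalisation is a placeholder.
        return 1e-10 * stellar_mass

    def transient_rates(stellar_mass):
        # Events per year for one dwarf galaxy, assuming each transient class
        # scales linearly with SFR. All coefficients are illustrative only.
        sfr = star_formation_rate(stellar_mass)
        return {
            "core_collapse_sn": 0.01 * sfr,   # events per solar mass of stars formed
            "type_ia_sn": 0.001 * sfr,
            "novae": 1.0 * sfr,
        }

    print(transient_rates(1e6))  # a 10^6 solar-mass dwarf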


Figure 3: The predicted supernova and novae rates occurring in dwarf galaxies within a certain distance (horizontal axis) away from the Milky Way. The left and right plots show the predictions for two different assumptions on the relationship between stellar mass and dark matter mass in dwarf galaxies (the lines in the right plot are shifted upwards compared to the left). The lines in each plot show the rates for galaxies of three different stellar masses. The dotted portions of these lines show which of these galaxies are spatially resolvable, solid show unresolved but brighter than the limit in Figure 2, and dashed show those that are currently undetectable. Galaxies that fall in the gray and blue boxes can be detected via the LSST through novae and supernovae respectively (Source: Conroy & Bullock 2015)

This is shown in Figure 3 for two different assumptions on how the stellar mass of a dwarf galaxy scales with the dark matter mass of that galaxy. Shown are the supernova and nova rates in dwarf galaxies within a certain distance (horizontal axis) from us. The three lines in each show these rates for galaxies with stellar masses less than 10^8, 10^6, and 10^5 solar masses. The dotted (leftmost) portions of each line show the distances out to which galaxies of that mass can be spatially resolved (i.e. where we don't need this new method), solid shows unresolved but a brightness greater than the limit given in Figure 2, and dashed are currently unobservable. Due in part to the "cadence" of the LSST (the length of the camera exposures and the frequency with which a given region is re-observed during the survey) there are limits as to which of these galaxies can be observed with the new method. The authors predict that galaxies which fall in the gray and blue boxes will be detectable by the LSST via novae and supernovae respectively.

Matching Supernova to Dwarf Galaxies

With their calculations, the authors conclude that the upcoming LSST should be able to detect 10-100 novae from dwarf galaxies with stellar masses of 10^5-10^6 solar masses every year out to about 30 Mpc, and 100-10,000 supernovae in these galaxies every year. With this, the LSST can be used to detect many more dwarf galaxies than currently known. Even though they are so faint, once we have discovered where these dwarf galaxies lie, we can use focused follow-up observations to observe them directly. With an increased sample of known dwarf galaxies stretching far from our Local Group of galaxies, we can better understand how galaxies form and evolve on the smallest scales.


Rosetta update: Two close flybys of an increasingly active comet by The Planetary Society

In the two months since I last checked up on the Rosetta mission, the comet has heated up, displaying more and more jet activity. Rosetta completed very close flybys on February 14 and March 28, taking amazing photos. But comet dust is making navigation difficult, so the mission is now keeping a respectful distance from the comet and replanning its future path.


Development of the OSIRIS-REx Sampling System: TAGSAM and the SRC by The Planetary Society

The OSIRIS-REx team has been busy assembling and testing the Touch-and-Go Sample Acquisition Mechanism (TAGSAM) and the Sample Return Capsule (SRC).


April 21, 2015

Gyorgy Kepes at Tate Liverpool. by Feeling Listless



Art My first encounter with a photocopier was at Tate Liverpool. It was during a school visit, when the education staff utilised various examples of manga, which was the comic strip in the ascendancy at the time, to illustrate how Roy Lichtenstein and the Pop artists chopped and changed and, as we'd describe it now, mashed up various images and themes to create new images and themes. They demonstrated how the photocopier could isolate various colours, or reduce or expand images, edit together characters and frames to create new implications. For speed, this was usually done without the lid down, so you could see the giant strobe scanning light shift back and forth below the paper.

But I didn't stop there. I put my hand or my cheek against the glass, which created strange human-like shapes against black backgrounds, and for ages whenever I saw a photocopier or scanner, I'd want to use it for something other than creating facsimiles of paper, for seeing how various objects looked when pressed against the glass, how the light refracted against them in conjunction with one another and how they looked within the resulting image. To be honest, the results weren't ever that remarkable but every now and then there'd be some surreal or abstract image created in which that strobing light had hit something at an unusual angle and produced an attractive effect.

That’s presumably why out of the three exhibitions in Tate Liverpool’s current Surreal Landscape season (with Leonora Carrington and Cathy Wilkes), it’s Gyorgy Kepes I’m most drawn to. Back in 1937, the late Hungarian-born artist, designer and educator hit upon the idea for “photograms”, a sort of “camera-less” photograph in which images were developed in the dark room by as the press notes describe “arranging and exposing objects directly on top of light-sensitive paper; juxtaposing geometric, industrial and organic forms to create images that are poised between abstraction and representation.” Like me, he was interested in seeing how a model for capturing images reacted when faced with disparate objects. Unlike me, Kepes was an artist.

The exhibition includes eighty of Kepes's photographs, photomontages and photograms from his Chicago period 1938-1942. After fleeing Nazi Germany in 1935, he settled in the city of big shoulders and became head of the Colour and Light department at the New Bauhaus School. Also during this period he wrote a book, Language of Vision, about his theories of the effect the new technologies of photography, cinema and television were having on visual culture. As this New York Times obituary notes, Kepes had a "long-held view that traditional art forms could no longer adequately speak to the problems of the modern world, a world too much conditioned, he believed, by chaos and alienation."

If anything informs the work most, it's the human eye. Although this features very specifically in two works, a photograph of an eye ultra close-up and a photomontage of various eyes from numerous sources, throughout the works are motifs of lenses and the mechanics of vision. Leaf and Prism exemplifies this, with its refraction patterns mimicking (albeit at the wrong angle) the veins of an organism which needs light to survive. There are also straight photographs of collections of objects, usually with an inventory of them for a title, Cone, Prism, Rock or Prism, Compass, Grid 2, in which it's the shapes of the items rather than the items themselves which are important, how they merge into one another as we stare at them for longer than a glance.

As Kepes said himself (I'm quoting from the in-gallery text), "the master of nature is ultimately connected with the mastery of space; this visual orientation. Each new visual environment demands a reorientation, a new way of measuring". We shouldn't look at all of these images in the same way. We have to recalibrate our expectations and perceptions as though we've never encountered something quite like that before. That's important to keep in mind when encountering the exhibition. Few people stop to look at the close-up of an ear presumably because they've seen a few ears in their lives, but what is it about this ear? What's important about this ear? What are its distinguishing features?

For old times' sake and because there are a lot of images of what we must assume are Kepes's own hands in the exhibition, I decided to scan my own.  Using the HP desktop scanner next to the computer I stood with the appendage on the glass and waited for the light to stutter across the glass, trying not to move.  There isn't a black and white setting on the machine which is why it's in colour.  I'd expected it to be surrounded in black but I think the grey area is actually the ceiling above my hand.  I suppose I'll have to take another scan with something pinned up there to check.  Which suggests that my experimentation days aren't over yet...

György Kepes is at Tate Liverpool from 6 March – 31 May 2015. Admission Free.


Devops ... we've been here before, we will be back again. by Simon Wardley

In this post I want to explore the causes of DevOps and how you can use such knowledge to advantage in other fields. I'm going to start with a trawl back through history and four snippets from a board pack in early 2007. These snippets describe part of the live operations of Fotango, a London based software house in 2006.

Snippet 1


We were running a private infrastructure as a service with extensive configuration management, auto deployment and self healing (design for failure) of systems based upon cfengine. We were using web services throughout to provide discrete component services and had close to continuous development mechanisms. In 2006, we were far from the only ones doing this but it was still an emerging practice. I didn't mention agile development in the board pack ... that was old hat.

Snippet 2


To be clear, we were running a private and a public platform as a service back in 2006. This was quite rare, still very much an early emerging practice.

Snippet 3


In early 2007, we had switching of applications between multiple installations of platform as a service from our own private infrastructure as a service (Borg) to one we had installed on the newly released EC2. This was close to a novel practice.

Snippet 4


By early 2007 we were working on mechanisms to move applications or data between environments based upon cost of storage, cost of transfer and cost of processing. In some cases it was cheaper to move the data to the application, in other cases the application to the data. We were also playing some fairly advanced strategic games based upon tools like mapping. However, one of my favourite changes (which we barely touch on today) is when you had pricing information down to the function. This can significantly alter development practices i.e. we used to spend time focusing on specific functions because they were costly compared to other functions. You could literally watch the bill racking up in the real time billing system as your code was running and one or two functions always stood out. This always helps concentrate the mind and this was in the realm of novel practice in 2007.
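
To illustrate the idea (a sketch of my own, not Fotango's actual billing system), per-function cost tracking can be as simple as a decorator that accumulates a notional spend, assuming a purely hypothetical price per second of execution:

    import time
    from collections import defaultdict
    from functools import wraps

    PRICE_PER_SECOND = 0.0001  # hypothetical price per second of execution
    bill = defaultdict(float)  # accumulated cost per function name

    def metered(fn):
        # Accumulate a notional cost for every call to the wrapped function.
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                bill[fn.__name__] += (time.perf_counter() - start) * PRICE_PER_SECOND
        return wrapper

    @metered
    def resize_image(pixels):
        return [p * 2 for p in pixels]  # stand-in for real work

    resize_image(list(range(100000)))
    print(dict(bill))  # which functions are racking up the bill

Run against a real workload, a tally like this makes the one or two expensive functions stand out immediately, which is exactly the effect described above.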

Much of what we talk about regarding DevOps and the changes in practice today are not new. It is simply becoming good practice in our industry. For the majority of these changes, the days of novel and emerging practice have long gone. Many companies are however only just starting their journey and whilst most will get some things right - design for failure, distributed systems, use of good enough components, continuous deployment, compartmentalising systems and chaos engines - many are almost certainly doomed to repeat the same mistakes we made long ago - single size methods (agile everywhere), bimodal and API everything (some things just aren't evolved enough yet). Much of that failing will come from our desire to apply single methods without truly understanding the causes of change ... but we will get to that shortly.

The above is all perfectly normal and so is the timeframe. On average, it can take 20 to 30 years for a novel practice to become defined as a best practice. We're actually a good 10-15 years into our journey (in some cases more), so don't be surprised if it takes another decade for the above to become common best practice. Don't also be surprised by the clamouring for skills in this area, that's another normal effect as every company wakes up to the potential and jumps on it at roughly the same time. Demand always tends to outstrip supply in these cases because we're lousy at planning for exponential change.

However, this isn't what interests me. What fascinates me are the causes of change (for reasons of strategic gameplay). To explain this, I need to distinguish between two things - the act (what we do) and the practice (how we do stuff). I've covered this before but it's worth reiterating that both activities and practices evolve through a common path (see figures 1 & 2) driven by competition.

Figure 1 - Evolution of an Act


Figure 2 - Evolution of Practice


Now, what's important to remember is the practice is dependent upon but distinct from the act. For this reason practices can co-evolve with activities. To explain, the best architectural practice around servers is based upon the idea of compute as a product (the act). These practices include scale up, N+1 and disaster recovery tests.  However, best architectural practice around IaaS is based upon the idea of compute as a utility i.e. volume operations of good enough components.  These practices include scale out, design for failure and chaos engines. In general, best practice for a product world is rarely the same as best practice for a utility world.

However, those practices have to come from somewhere and they evolve through the normal path of novel, emerging, good and best practice. To tie this together I've provided an example of how practice evolves with the act in figure 3 using the example of compute. 

Now, normally with a map I use an evolution axis of genesis, custom built, product (+rental) and commodity (+utility). However practices, data and knowledge all evolve through the same pattern of ubiquity and certainty.  So on the evolution axis I could use :-

Activities : Genesis, Custom Built, Product, Commodity
Practices : Novel, Emerging, Good, Best
Data : Unmodelled, Divergent, Convergent, Modelled
Knowledge : Concept, Hypothesis, Theory, Accepted

For simplicity's sake, I always use the axis of activities but the reader should keep in mind that on any map - activities, practices, data and knowledge can all be drawn. In this case, also for simplicity, I've removed the value chain axis.
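
As a minimal sketch of that idea (my own illustration, not a feature of any particular mapping tool), the four labellings can be treated as interchangeable scales for the same evolution axis:

    # The four evolution labellings above, keyed by component type.
    EVOLUTION_STAGES = {
        "activity":  ["Genesis", "Custom Built", "Product (+rental)", "Commodity (+utility)"],
        "practice":  ["Novel", "Emerging", "Good", "Best"],
        "data":      ["Unmodelled", "Divergent", "Convergent", "Modelled"],
        "knowledge": ["Concept", "Hypothesis", "Theory", "Accepted"],
    }

    def stage_label(component_type, stage_index):
        # Stage name for a component type at a point (0-3) along the shared axis.
        return EVOLUTION_STAGES[component_type][stage_index]

    print(stage_label("practice", 1))  # "Emerging"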

Figure 3 - Coevolution of practice with the act


From the above, the act evolves to a product and new architectural practices for scaling, capacity and testing develop around the concept of a product. These practices evolve until they become best practice for the product world. As the underlying act now evolves to a more industrialised form, a new set of architectural practices appear. These evolve until they become best practice for that form of the act. This gives the following steps outlined in the above :-

Step 1 - Novel architectural practices evolve around compute as a product
Step 2 - Architectural practices evolve becoming emerging and good practice
Step 3 - Best architectural practices develop around compute as a product
Step 4 - Compute evolves to a utility
Step 5 - Novel architectural practice evolves as compute becomes a commodity and is treated as a utility
Step 6 - Architectural practices evolve becoming emerging and good practice
Step 7 - Ultimately these good practices (DevOps) will evolve to become best practice for a utility world.

When we talk about legacy in IT, we're generally talking about applications built with best architectural practice for a product world. When we talk about DevOps, we're generally talking about applications built with best architectural practice for a utility world. Both involve "best" practice, it's just the "best" practices are different because the underlying act has evolved.

This process of co-evolution of practice with activity has occurred throughout history whether in engineering, finance or IT. When the act that is evolving has a significant impact on many different and diverse value chains then its evolution can cause macro economic effects known as k-waves or ages. With these ages, the new co-evolved practices that emerge tend to be associated with new forms of organisation. Hence in the mechanical age, the American System was born. With the electricity age, we developed Fordism.

Knowing this pattern of change enabled me to run a set of population experiments on companies to confirm the model and identify a new phenotype of an emerging company form (the next generation) back in 2011. The results of which are shown in table 1.

Table 1 - Next generation vs Traditional organisations


It's precisely because I understood this pattern and how practices evolved that back in Canonical (2008-2009) we knew we had to attack not just the utility compute space but also the emerging practice space (a field which became known as DevOps). It was actually one of my only causes of disagreement with Mark during my time there as I was adamant we should be adopting Chef (a system developed by a friend of mine, Jesse Robbins). However, Mark had good reasons to focus elsewhere and at least we could have the discussion.

When it comes to attacking a practice space then natural talent and mindset are key. In the old days of Fotango, I captured a significant proportion of talent in the Perl industry through the creation of a centre of gravity (a post for another day). It was that talent that created not only the systems but discovered the architectural practices required to make it work. Artur Bergman (now the CEO of Fastly) developed many of the systems and subsequently was influential in the Velocity conference (along with Jesse). Those novel practices were starting to evolve in 2008.

In the Canonical days, I employed a lesser known but highly talented individual who was working on the management space of infrastructure - John Willis (Botchagalupe). Again my focus was deliberate, I needed someone to help capture the mindset in that space and John was perfect for the role. I didn't quite get to play the whole centre of gravity game at Canonical and there were always complications but enough was done. John himself has gone on to become another pillar of the DevOps movement.

Now, this pattern of co-evolution of practice and activity repeats throughout history and we have many future examples heading our way in different industries. All the predictable forms of this type of change are caused by the evolution of underlying activities to more industrialised forms. For example, manufacturing should be a very interesting example circa 2025-2035 due to commoditisation of underlying components through 3D printing, printed electronics and hybrid printing enabling new manufacturing practices. It even promises an entirely new form of language - SpimeScript - which is why the Solid conference by O'Reilly is so interesting to me. Any early signs are likely to appear there.

It's worth diving a bit deeper into this whole co-evolution subject and for that I'm going to use Dave Snowden's Cynefin framework. For those who don't know this framework, I would suggest reading up on it.  In figure 4, I've provided a general image to describe the framework.

Figure 4 - Cynefin.


CC3.0 SA by Dave Snowden

So let us go back in time to when the first compute products were introduced i.e. the IBM 650. Back then, there was no architectural practice for how to deal with scaling, resilience and disaster recovery. These weren't even things in our mindset. There was no book to read, there was no well trodden path and we had to discover these practices. What became obvious later was unknown, undiscovered and uncharted.

Hence people would build systems with these products and discover issues such as capacity planning and failure - we acted, we sensed and then we had to respond to what we found. We had to explore what the causes of these problems were and create models and practices to try and cope. Those practices were as emerging in the late 1960s as the practices of Fotango were in the mid 2000s. As our understanding of this space grew, those practices developed. We built expertise in this space and the tools to manage this. We talked of bottlenecks and throughput and of N+1, of load and of capacity. We started to anticipate the problems before they occurred - running out of storage space became a sign of poor practice. We sensed our environment with a range of tools, we analysed for points of failure and we responded before it happened. Books were written and architectural practice moved firmly into the space of the good. We then started to automate more - RAID, hot standby, clusters and endless tools to monitor and manage a complex environment of products (compute as services). Our architectural practice became best practice.

But as the underlying act evolved from compute as a product to compute as more of a commodity and ultimately a utility then the entire premise on which our practices were based changed. It wasn't about THE machine, it was about volume operations of good enough. We had to develop new architectural practices. But there was no book, no well trodden path and no expertise to call on. We had to once again use these environments, sense what was happening and respond accordingly. We created novel architectural practices which we refined as we understood more about the space. We learnt about design for failure, distributed systems and chaos engines - we had to discover and develop these. 

As we explored we developed tools and a greater understanding. We started to have an idea of what we were looking for. The practices started to emerge and later develop. Today, we have expert knowledge (the DevOps field), a range of tools and well practiced models. We're even starting to automate many aspects of DevOps itself. 

The point to note, is that even though architectural practice developed to the point of being highly automated, best practice and "obvious" in the product world, this was not the end of the story. The underlying act evolved to a more industrialised form and we went through the whole process of discovering architectural practices again. 

Now that change of practice (and related Governance structures) is one of the sixteen forms of inertia companies face when dealing with change. However, because of competition dynamics, this change is inevitable (the Red Queen effect). We don't get a choice about this and that gives me an advantage. To explain why I'll use an example from a company providing workshops.

The Workshop

This example relates to a company that provides workshops and books related to best practice in the environmental field. It's a thriving business which provides expert knowledge and advice (embodied in those workshops and books) about effective use of a specific domain of sensors. I have to be a bit vague here for reasons that will become obvious. The sensors used are quite expensive products but new more commoditised forms are appearing, mainly in Asia. At first glance, this appears to be beneficial because it'll reduce operating costs and is likely to expand the market. However, there is a danger.

To explain the problem, I'm going to use a very simple map on which I've drawn both activity and practice to describe the business (see figure 5)

Figure 5 - The Business



The user need is to gain best practice skills in the use of the sensors, and the company provides this through workshops and associated materials such as books based upon best practice. Now the sensors are evolving. This will have a number of effects (see figure 6).

Figure 6 - Impact of the Change


From the above,

Step 1 : the underlying sensor becomes a commodity
Step 2 : this enables a novel practice (based upon commodity sensors) to appear. This practice will evolve to become emerging and then good.
Step 3 : the existing workshop business will become legacy
Step 4 : a workshop business based upon these more evolved practices will develop and it's the future of the market.

This change is not just about reducing the operational costs of sensors; instead the whole business of the company will alter. The materials (books, workshops, tools etc) that they have will become legacy. Naturally the company will resist these changes as they have a pre-existing business model, past revenues to justify the existing practices and a range of current skills, knowledge and relationships developed in this space.  However, it doesn't matter because competition has driven the underlying act to more of a commodity and hence a new set of practices will emerge and evolve and the existing business will become legacy regardless.

Fortunately this hasn't happened yet. Even more fortunately, with a map we can anticipate what is going to happen, we can identify our inertia, we can discuss and plan accordingly. We know those novel practices will develop and we can aim to capture that space by developing talent in that area. We know we can't write those practices down today and we're going to have to experiment, to be involved, to act / sense and respond.

We can prepare for how to deal with the legacy practices, possibly aiming to dispose of part of this business. Just because we know the legacy practice will be disrupted doesn't mean others do, and if we have a going concern then we can maximise capital by flogging off this future legacy to some unsuspecting company or spinning it off in some way. Of course, timing will be critical. We will want to develop our future capability (the workshops, tools, books and expertise) related to the emerging practice, extract as much value from the existing business as possible and then dump the legacy at a time of maximum revenue / profit on the market without the wider industry being aware of the change. If you've got a ticking bomb never underestimate the opportunity to flog it to the market at a high price. Oh, and when it goes off, don't miss out on the opportunity of scavenging the carcass of whatever company took it for other things of value e.g. poaching staff etc.

There's lots we can do here, maybe spread a bit of FUD (fear, uncertainty and doubt) about the emerging practices to compound any inertia that competitors have. We know the change is inevitable but we can use the FUD to slow competitors and also give us an ideal reason (internal conflict) for diversifying the business (i.e. selling off the future "legacy" practice). There's actually a whole range of games we can play here from building a centre of gravity in the new space, disposal of the legacy (known as pig in a poke), to ecosystem plays to misdirection.

This is why situational awareness and understanding the common patterns of economic change is so critical in strategic gameplay. The moves we make (i.e. our direction) based upon an understanding of the map (i.e. position and movement of pieces) will be fundamentally different from not understanding the landscape and thinking solely that commodity sensors will just reduce our operational costs. This is also why maps tend to become highly sensitive within an organisation (which is why I often have to be vague).

When you think of DevOps, don't just think about the changes in practice in this one instance. There's a whole set of common economic patterns it is related to and those patterns are applicable to a wide variety of industries and practices. Understanding the causes and the patterns is incredibly useful when competing in other fields.

DevOps isn't the first time that a change of practice has occurred and it won't be the last. These changes can be anticipated well in advance and exploited ruthlessly. That's the real lesson from DevOps and one that almost everyone misses.


Soup Safari #22: Spiced Roast Tomato with Carrot, Garlic & Borlotti Beans at Tate Cafe Liverpool. by Feeling Listless







Lunch. £3.50. Tate Cafe Liverpool, Albert Dock, Liverpool Waterfront, Liverpool L3 4BB. Phone: 0151 702 7400. Website.


The Last Reel: An Ode to 35-Millimeter Film. by Feeling Listless


The Lives of the Longest Lived Stars by Astrobites

  • Title: The End of the Main Sequence
  • Authors: Gregory Laughlin, Peter Bodenheimer, and Fred C. Adams
  • First Author’s Institution: University of Michigan (when published), University of California at Santa Cruz (current)
  • Publication year: 1997

Heavy stars live like rock stars: they live fast, become big, and die young. Low mass stars, on the other hand, are more persistent, and live longer. The ages of the former stars are measured in millions to billions of years; the expected lifetimes of the latter are measured in trillions. Low mass stars are the turtle that beats the hare.


Figure 1: An artist’s impression of a low-mass dwarf star. Figure from here.

But why do we want to study the evolution of low mass stars, and their less than imminent demise? There are various good reasons. First, galaxies are composed of stars —and other things, but here we focus on the stars. Second, low-mass stars are by far the most numerous stars in the galaxy: about 70% of stars in the Milky Way are less than 0.3 solar masses (also denoted as 0.3M). Third, low-mass stars provide useful insights into stellar evolution: if you want to understand why heavier mass stars evolve in a certain way —e.g. develop into red giants— it is helpful to take a careful look at why the lowest mass stars do not.

Today's paper was published in 1997, and marked the first time the evolution and long-term fate of the lowest mass stars were calculated. It still gives a great overview of their lifespans, which we look at in this astrobite.

Stellar evolution: The life of a 0.1M star

The authors use numerical methods to evolve the lowest mass stars. The chart below summarizes the lifespan of a 0.1M star on the Hertzsprung-Russell diagram, which plots a star’s luminosity as a function of effective temperature. The diagram is the star’s Facebook wall; it gives insight into events in the star’s life. Let’s dive in and follow the star’s lifespan, starting from the beginning.

The star starts out as a protostar, a condensing molecular cloud that descends down the Hayashi track. As the protostar condenses it releases gravitational energy, it gets hotter, and pressures inside it increase. After about 2 billion years of contraction, hydrogen fusion starts in the core. We have reached the Zero Age Main Sequence (ZAMS), where the star will spend most of its life, fusing hydrogen to helium.

Figure 2: The life a 0.1M star shown on the Hertzsprung-Russell diagram, where temperature increases to the left. Interesting life-events labelled. Figure 1 from the paper, with an annotated arrow.

 

The fusion process creates two isotopes of helium: ³He, an intermediate product, and ⁴He, the end product. The inset chart plots the core composition of H, ³He, and ⁴He. We see that for the first trillion (note trillion) years hydrogen drops, while ⁴He increases. ³He reaches a maximum, and then tapers off. As the star's average molecular weight increases, the star grows hotter and more luminous. It moves to the upper left on the diagram. The star has now been evolving for roughly 5.7 trillion years, slowly turning into a hot helium dwarf.

The red arrow on the diagram marks a critical juncture in the star's life. Before now, the energy created by fusion has been transported by convection, which heats up the stellar material, causing it to move and mix with other colder parts of the star, much in the same way that a conventional radiator heats your room. This has kept the star well mixed, and maintained a homogeneous chemical composition throughout the star. Now, the physics behind the energy transport changes. The increasing amounts of helium lower the opacity of the star, a measure of radiation impenetrability. Lowering the opacity makes it easier for photons to travel larger distances inside the star, making them more effective than convection at transporting energy. We say that the stellar core becomes radiative. This causes the entire star to contract and produces a sudden decline in luminosity (see red arrow).


Figure 3: The interior of a 0.1M star. The red arrow in Figure 2 marks the point where the star’s core changes from being convective to radiative. Figure from here.

Now the evolutionary timescale accelerates. The core, now pure helium, continues to increase in mass as hydrogen is exhausted in a nuclear-burning shell around it. On the Hertzsprung-Russell diagram the star moves rapidly to higher temperatures, and will eventually grow hotter than the current Sun, but only 1% as bright.  Afterwards, the star turns a corner. The star starts to cool off, the shell source is slowly extinguished, and the luminosity decreases. The star is on the cooling curve, moving towards Florida on the Hertzsprung-Russell diagram, on its way to becoming a low-mass helium white dwarf.

The total nuclear burning lifetime of the star is somewhat more than 6 trillion years, and during that time the star used up 99% of its initial hydrogen; the Sun will only burn about 10%. Incredible efficiency.
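
For a rough sense of scale (my own back-of-the-envelope check, not a calculation from the paper), the textbook main-sequence lifetime scaling of roughly 10 Gyr x (M/Msun)^-2.5 already puts a 0.1 solar mass star into trillion-year territory; it falls short of the paper's ~6 trillion years partly because it assumes only a fixed fraction of the hydrogen is ever burned, whereas a fully convective dwarf burns nearly all of it.

    def ms_lifetime_gyr(mass_solar):
        # Rough textbook scaling: t ~ 10 Gyr * (M / Msun)**-2.5
        return 10.0 * mass_solar ** -2.5

    print(ms_lifetime_gyr(1.0))  # ~10 Gyr for the Sun
    print(ms_lifetime_gyr(0.1))  # ~3200 Gyr, i.e. about 3 trillion years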

The lifespans of 0.06M – 0.20M stars

Additionally, the authors compare the lifespans of stars with masses similar to the 0.1M star. Their results are shown in Figure 4. The lightest object, a 0.06M star, never starts fusing. Instead, it rapidly cools, and fades away as a brown dwarf. Stars with masses between 0.08M and 0.16M have similar lives to the star in Figure 2. All of them travel increasingly to the left on the Hertzsprung-Russell diagram after developing a radiative core. The radiative cores appear at progressively earlier times in the evolution as the masses increase. Stars in the mass range 0.16M-0.20M behave differently, and the authors mark them as an important transition group. These stars have a growing ability to swell, compared to the lighter stars. This property is what ultimately fuels even higher mass stars to become red giants.


Figure 4: The evolution of stars with masses between 0.06M and 0.25M shown on a Hertzsprung-Russell diagram. The inset chart shows that stellar lifetimes drop with increasing mass. Figure 2 from the paper.

Implications

Fusing hydrogen slow and steady wins the stellar age-race. We see that the lowest mass stars can reach ages that greatly exceed the current age of the universe — by a whopping factor of 100-1000! These stars are both the longest lived, and also the most numerous in the galaxy and the universe. Most of the stellar evolution that will occur is yet to come.


What Color Does the Internet Think Pluto Is? by The Planetary Society

Astronomers have known for a long time that Pluto’s surface is reddish, so where did the common idea that Pluto is blue come from?


April 20, 2015

Inverted realities by Charlie Stross

OK, here's an idle thought (and a question) for you ...

A couple of weeks ago at the British Eastercon I found myself on a panel discussion about vampires. (Hey, I've been trying to get the hell away from being Mr Singularity Guy for years now; what's your problem?)

Anyway, there I was sitting with Freda Warrington and Jim Butcher, and our moderator opens up by asking, "what makes vampires sexy?"

And I suddenly realized I had come to the right place for an argument. Because ...

Vampires are not sexy. At least, not in the real world.

Desmodus rotundus isn't sexy. (Except insofar as small furry rodents that carry rabies aren't as un-sexy as some other obligate haemophages.) Bed bugs are really not sexy. But if you want maximally not-sexy, it's hard to top Placobdelloides jaegerskioeldi, the Hippo Arse Leech.

The Hippo Arse Leech is a leech; it sucks blood. Like most leeches, its mouth parts aren't really up to drilling through the armour-tough skin of a hippopotamus, so it seeks out an exposed surface with a much more porous barrier separating it from the juicy red stuff: the lining of the hippo rectum. When arse leeches find somewhere to feed, in due course happy fun times ensue—for hermaphrodite values of happy fun times that involve traumatic insemination. Once pregnant, the leeches allow themselves to be expelled by the hippo (it's noteworthy that hippopotami spin their tails when they defecate, to sling the crap as far away as possible—possibly because the leeches itch—we're into self-propelled-hemorrhoids-with-teeth territory here), whereupon in the due fullness of time they find another hippo, force their way through its arse crack, and find somewhere to chow down. Oh, did I mention that this delightful critter nurtures its young? Yep, the mother feeds her brood until they're mature enough to find a hippo of their own. (Guess what she feeds them with.)

Here's a video by Mark Siddall, professor of invertebrate zoology at the American Museum of Natural History, a noted expert on leeches, describing how he discovered P. jaegerskioeldi, just in case you think I'm making this up.

By the end of my description Jim and Freda were both ... well, I wish I'd thought to photograph their faces for posterity. So were the audience. And that's when I got to the money shot: the thing about fictional vampires is, vampires are only sexy when they're anthropomorphic.

Let's leave aside the whole living dead angle (a callback to ancient burial traditions in northern climes, where the decay of corpses might be retarded by cold weather: and when a family sickened and died one after the other, from contagious diseases such as tuberculosis, on opening the family crypt an undecaying rosy-cheeked corpse might be found with blood trickling from its mouth). Let's look solely at the vampire motif in modern fiction, where sexy vampires are used as a metaphor for the forbidden lover. Do we see anything approximating a realistic portrayal of actual blood-drinking organisms? Do we hell! Blood isn't actually very nutritious, so haemophagous parasites tend to be small, specialized, and horrifyingly adapted: biological syringes with a guidance system and a digestive tract attached. If we expanded a real one to human size it'd be a thing of horror, fit to give Ridley Scott or H. R. Giger nightmares. But I digress: the thing is, we know what real bloodsucking fiends look like, and do we find them in our fiction? We do not.

So here we have a seeming paradox: a class of organism that is represented in fictionalized, supernatural form in a manner that is pretty much the antithesis of their real world presentation. There's an entire sub-genre in which we are expected to temporarily pretend that the smouldering sexy vampire lover isn't actually a hippo arse leech squirming and eager to dig its jaws into your rectal mucosa. And now I am shaking my head and wondering, thoughtfully, if I can see any other parasitic life-cycles that are amenable to converting into supernatural fictional tropes? (Your first example being, of course, my use of angler fish sex as a model for unicorns ...)

PS: If you are a creationist, the onus is on you to come to terms with why your God saw fit to inflict a parasite like this on hippopotami. Just sayin'.


A Labour / Conservative Coalition by Simon Wardley

For over 20 of the last 100 years, we've had a coalition Government formed by the two major parties. Given the current economic climate, I hold a view that a strong Government created by a coalition between Labour and Conservatives is desirable. 

But could it work? 

"No!" is the cry most often heard. So this weekend I went through dissecting both manifestos and the structure of both parties. I'm not convinced that such a coalition couldn't work in terms of the structure of MPs & Ministers and the commitments made in the manifestos. I put together an ideal cabinet and list of senior ministers and worked on a joint manifesto that could be "negotiated" and just about funded (based on the limited information I have and lots of assumptions I made). 

To me, it makes sense but then so does a Labour / Conservative coalition.

So anyway before the guffaws start, here is ...

Wardley's Naive Manifesto of Working Togetherness for the Common Interest




... oh and before you tell me this could never happen, I'm fully aware that it would require both parties putting aside political interests and working for the common interest. I already know how unlikely that is. That doesn't mean that I can't dream.

I'll put up the table with the composition of such a Government tomorrow. 


The Party Manifestos 2015: SNP. by Feeling Listless

Politics And finally ... I've decided just to cover the parties featured in the original leaders' debate on ITV because otherwise I'd be here until election day.

We end with the SNP.

The BBC.

We believe that responsibility for broadcasting in Scotland should transfer from Westminster to the Scottish Parliament and we will support moves to more devolved arrangements for the BBC with greater powers and funding for the different national and regional broadcasting areas, such as BBC Scotland.

We believe that the licence fee should be retained with any replacement system, which should be based primarily on the ability to pay, in place by the end of the next BBC Charter period.

BBC Scotland should receive a fairer share of BBC income, reflecting more accurately the licence fee revenue raised here in Scotland. This would provide a boost of over £100 million, which we believe will provide important new opportunities for production companies and the creative sector in Scotland.

The Scottish Government and Parliament should have a substantial role at all stages in the review of the BBC Charter and we will work to ensure that any new governance arrangements for the BBC better reflect Scotland’s interests.
We believe the licence fee should be retained ... but ... this is about what's expected. There is a problem in general in terms of national coverage and how programmes are made although it's also notable (and I've said this before) that England doesn't have an equivalent of BBC Alba or an "England" genre on the iPlayer either.

Global Emissions.
As the Scottish Government, we are consulting on measures to reduce emissions in Scotland, including looking at the creation of Low Emission Zones. We will continue to develop our zero waste strategy, supporting a range of initiatives, for example the ongoing pilot project for reverse vending machines to encourage rewards for recycling.

We will use our influence at Westminster to ensure the UK matches, and supports, Scotland’s ambitious commitments to carbon reduction and that we play a positive role in the UN Climate Change conference in Paris. We will also look for the Bank of England to continue its work on the potential impact of climate change on financial stability in the UK and report on how it can best respond.
Which is all fine, as is backing of renewables, except there's also a fair amount of backing for the oil and gas industries. Generally the policies here align with Labour and the LibDems because of those inconsistencies. They do support a moratorium on fracking.

Libraries

No specific mention of libraries at least based on a text search.  Note this is a document without a contents page (or index as per the LibDems).

Film Industry
We support the creation of a Creative Content Fund for the games industry to encourage the formation of new studios and also back the retention of the Video Games tax relief. We back industry calls for an increase in the SEIS investment limit and changes to the Shortage Occupation List to recognise specific skills needs in the sector.
Nothing about the film industry exactly. The creative section is the BBC paragraphs above and this about the games industry.

Gender Equality
We have also called for early action on Equal Pay audits for larger companies to ensure women are getting the salaries they are entitled to. We will demand that section 78 of the Equalities Act 2010 is commenced and that regulations to compel employers of more than 250 people to publish annual gender pay gap information, starting in 2016-17, are consulted on and brought into law.

With powers over equalities devolved, we would bring forward an Equal Pay (Scotland) Bill to finally deliver equal pay law that works for women in Scotland. It is unacceptable - 45 years after the Equal Pay Act was passed in 1970 - that the gender pay gap remains. This would include consultation on how new regulations or structures can be created by the Bill to expedite the equal pay claims process, and ensure that settlements are enforced quickly.
Aha, so that's where the 250 figure from the Labour and LibDem manifestos is from. Again, I don't understand why it isn't 200 or 150 or some other arbitrary figure (even having had a glance about online). There are some other useful policies in this area including "50:50 representation on public and private boards" and the abolition of VAT on sanitary towels.

Anyway, here's a direct link to the manifesto:

http://votesnp.com/docs/manifesto.pdf

I can't vote for them.


Talks Collection: James Shapiro. by Feeling Listless

Literature James S. Shapiro is Professor of English and Comparative Literature at Columbia University who specialises in Shakespeare and the Early Modern period and is on the front line of scholars defending Shakespeare's authorship. Here is a partial bibliography.

Shakespeare and the Jews (1996)

For Columbia University's Theatre Talk:




1599 : A Year in the Life of William Shakespeare (2005)

For Columbia University's Theatre Talk:




Contested Will: Who Wrote Shakespeare? (2010)

For Simon & Schuster:



For Blackwells Podcast:




For The Aspen Institute:



For WNYC:



For Columbia University's Theatre Talk:



For The Old Globe in San Diego:



For Shakespeare Brasil:



For Ohio State:



For Daves Gone By:




Shakespeare in America: An Anthology from the Revolution until Now, ed. James Shapiro, with a foreword by Bill Clinton. (2014)

For the Baker-Nord Center for English at Case Western Reserve University:



For 92nd Street Y:



For Columbia University:



For The Library of America:






Misc:

On the Sam Wanamaker Playhouse:



On Macbeth:



On Richard III:




On Andrea Chapin's novel The Tutor:


Brand archaeology by Richard Pope

TfL bike key with quarter of red Santander logo removed to reveal blue roundel

TfL bike key with half of red Santander logo removed to reveal blue roundel and part of Barclays logo

Reverse of a TfL bike key with plastic peeled back to reveal a Barclays URL


April 18, 2015

Cornerhouse Memories. by Feeling Listless

Film Chris Payne of Northern Soul has canvassed staff members at Cornerhouse for their memories of working there and looking forward to the new venue HOME. Here's the chief projectionist on meeting Richard Attenborough:

I met Dickie on two occasions. Each time was a BAFTA event, as he was chairman at the time. Whenever he had a film out he would screen it for members and give a little talk afterwards. In 1985 he presented A Chorus Line, then in 1987 Cry Freedom. I was in projection room 2/3 and he came in and chatted a bit.

I can’t remember exactly what we said, but he was interested in the projectors and everything. He also came in during the interval for Cry Freedom. Perhaps we should have left out the intermission. As everybody got up to leave he said something like ‘Oh dear, I do hope they’ll come back.’ We waited a while and gradually they filed back in. I remember he had his arm around my shoulders in true ‘luvvie‘ fashion.
As I said a couple of weeks ago, I'm really going to miss the place.


The Milky Way’s Alien Disk and Quiet Past by Astrobites

Title: The Gaia-ESO Survey: A Quiescent Milky Way with no Significant Dark/Stellar Accreted Disk
Authors: G. R. Ruchti, J. I. Read, S. Feltzing, A. M. Serenelli, P. McMillan, K. Lind, T. Bensby, M. Bergemann, M. Asplund, A. Vallenari, E. Flaccomio, E. Pancino, A. J. Korn, A. Recio-Blanco, A. Bayo, G. Carraro, M. T. Costado, F. Damiani, U. Heiter, A. Hourihane, P. Jofre, G. Kordopatis, C. Lardo, P. de Laverny, L. Monaco, L. Morbidelli, L. Sbordone, C. C. Worley, S. Zaggia
First Author’s Institution: Lund Observatory, Department of Astronomy and Theoretical Physics, Lund, Sweden
Status: Accepted for publication in MNRAS

 

 

Galaxy-galaxy collisions can be quite spectacular. The most spectacular occur among galaxies of similar mass, where each galaxy’s competing gravitational forces and comparable reserves of star-forming gas are strong and vast enough to contort the other into bright rings, triply-armed leviathans, long-tailed mice, and cosmic tadpoles. Such collisions, as well as their tamer counterparts between galaxies with large differences in mass—perhaps better described as an accretion event rather than a collision—comprise the inescapable growing pains for adolescent galaxies destined to become the large galaxies adored by generations of space enthusiasts, a privileged group of galaxies to which our home galaxy, the Milky Way, belongs.

What's happened to the hapless galaxies thus consumed by the Milky Way?  The less massive among these unfortunate interlopers take a while to fall irreversibly deep into the Milky Way's gravitational clasp, and thus dally, largely unscathed, in the Milky Way's stellar halo during their long but inevitable journey in.  More massive galaxies feel the gravitational tug of the Milky Way more strongly, shortening the time it takes the interloper to orbit and eventually merge with the Milky Way as well as making them more vulnerable to being gravitationally ripped apart.  But this is not the only gruesome process the interlopers undergo as they speed towards their deaths.  Galaxies whose orbits cause them to approach the dense disk of the Milky Way are forced to plow through the increasing amounts of gas, dust, stars, and dark matter they encounter.  The disk produces a drag-like force that slows the galaxy down—and the more massive and/or dense the galaxy, the more it's slowed as it passes through.  Not only that, the disk gradually strips the unfortunate galaxy of the microcosm of stars, gas, and dark matter it nurtured within.  The most massive galaxies—those at least a tenth of the mass of the Milky Way, the instigators of major mergers—accreted by the Milky Way are therefore dragged towards the disk and are forced to deposit their stars, gas, and dark matter preferentially in the disk every time their orbits bring them through the disk.  The stars deposited in the disk in such a manner are called "accreted disk stars," and the dark matter deposited forms a "dark disk."

The assimilated stars are thought to compose only a small fraction of the stars in the Milky Way disk. However, they carry the distinct markings of the foreign worlds in which they originated.  The accreted galaxies, lower in mass than the Milky Way, are typically less efficient at forming stars, and thus contain fewer metals and alpha elements produced by supernovae, winds of some old red stars, and other enrichment processes instigated by stars.  Some stars born in the Milky Way, however, are also low in metals and alpha elements (either holdovers formed in the early, less metal- and alpha element-rich days of the Milky Way’s adolescence or formed in regions where gas was not readily available to form stars).  There is one key difference between native and alien stars that provides the final means to identify which of the low metallicity, low alpha-enriched stars were accreted: stars native to the Milky Way typically form in the disk and thus have nearly circular orbits that lie within the disk, while the orbits of accreted stars are more randomly oriented and/or more elliptical (see Figure 1).  Thus, armed with the metallicity, alpha abundance, and kinematics of a sample of stars in the Milky Way, one could potentially pick out the stars among us that have likely fallen from a foreign world.

 

A search for the accreted disk allows us to peer into the Milky Way’s past and provides clues as to the existence of a dark disk—a quest the authors of today’s paper set out on.  Their forensic tool of choice?  The Gaia-ESO survey, an ambitious ground-based spectroscopic survey to complement Gaia, a space-based mission designed to measure the positions and motions of an astounding 1 billion stars with high precision, from which a 3D map of our galaxy can be constructed and our galaxy’s history untangled.  The authors derived metallicities, alpha abundances, and the kinematics of about 7,700 stars from the survey.  Previous work by the authors informed them that the most promising accreted disk candidates would have metallicities no more than about 60% that of the Sun, an alpha abundance less than double that of the Sun, and orbits that are sufficiently non-circular and/or out of the plane of the disk.  The authors found about 4,700 of them, confirming the existence of an accreted stellar disk in the Milky Way.
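
To make the selection concrete, here is a minimal sketch in Python of how such a cut might be applied to a star catalogue. The thresholds follow the rough criteria quoted above, abundances are treated as the usual logarithmic ratios relative to the Sun, and the function and variable names are invented for illustration rather than taken from the paper.

import numpy as np

def accreted_disk_candidates(feh, alpha_fe, on_circular_disk_orbit):
    # [Fe/H] and [alpha/Fe] are log10 abundances relative to the Sun, so
    # "no more than ~60% of the solar metallicity" means [Fe/H] <= log10(0.6)
    # and "less than double the solar alpha abundance" means [alpha/Fe] <= log10(2).
    metal_poor = np.asarray(feh) <= np.log10(0.6)
    alpha_poor = np.asarray(alpha_fe) <= np.log10(2.0)
    odd_orbit = ~np.asarray(on_circular_disk_orbit)  # non-circular and/or out of the plane
    return metal_poor & alpha_poor & odd_orbit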

Were any of these stars deposited in spectacular mergers with high-mass galaxies?  It turns out that one can predict the mass of a dwarf galaxy from its average metallicity.  The authors estimated two bounds on the masses of the accreted galaxies: one by assuming that all the stars matching their accreted disk star criteria were bona fide accreted stars, and the other by throwing out stars that might belong to the disk—those with metallicities greater than 15% of the Sun’s.  The average metallicity of the first subset of accreted stars was about 10 times less than the Sun’s, implying that they came from galaxies with a stellar mass of 10^8.2 solar masses.  Throwing out possible disk stars lowered the average metallicity to about 5% of the Sun’s, implying that they originated in galaxies with a stellar mass of 10^7.4 solar masses.  In comparison, the Milky Way’s stellar halo is about 10^10 solar masses.  Thus it appears that the Milky Way has, unusually, suffered no recent major mergers, at least since it formed its disk about 9 billion years ago.  This agrees with many studies that have used alternative methods to probe the formation/accretion history of the Milky Way.
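
To see roughly where those numbers come from, here is a back-of-the-envelope inversion of a dwarf-galaxy mass-metallicity relation of the commonly used form [Fe/H] = a + b log10(M*/10^6 M_sun). The coefficients below are illustrative (close to published dwarf-galaxy values) and not necessarily the ones the authors adopted.

import numpy as np

def stellar_mass_from_feh(feh, a=-1.69, b=0.30):
    # Invert [Fe/H] = a + b * log10(Mstar / 1e6 Msun); returns stellar mass in solar masses.
    return 1e6 * 10 ** ((feh - a) / b)

print(np.log10(stellar_mass_from_feh(-1.0)))            # ~8.3 for a mean metallicity of 10% solar
print(np.log10(stellar_mass_from_feh(np.log10(0.05))))  # ~7.3 for a mean metallicity of ~5% solar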

The lack of major mergers also implies that the Milky Way likely does not have a disk of dark matter.  This is an important finding for those searching for dark matter signals in the Milky Way, and one which implies that the Milky Way’s dark matter halo is oblate (flattened at the poles) if there is more dark matter than we’ve estimated based on simplistic models that assumed the halos to be perfectly spherical.

 


Figure 1. Evidence of a foreign population of stars.  The Milky Way’s major mergers (in which the Milky Way accretes a smaller galaxy with mass greater than a tenth of the Milky Way’s) can deposit stars in our galaxy’s disk.  These plots demonstrate one method to determine which stars may have originated in such a merger: how far a star’s orbit is from an in-plane circular orbit, as described by the Jz/Jc parameter.  Stars born in the disk (or “in-situ”) typically have circular orbits that lie in the disk plane—these have Jz/Jc close to one, whereas those that were accreted have lower Jz/Jc.  The plots above were computed for a major merger like that between the Milky Way and its dwarf companion the Large Magellanic Cloud, which has about a tenth the mass of the Milky Way.  If the dwarf galaxy initially has a highly inclined orbit (from left to right, 20, 40, and 60 degree inclinations), then the Jz/Jc of stars deposited in the disk by the galaxy becomes increasingly distinct.
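
For the curious, the circularity parameter in the caption can be approximated in a few lines. This toy version compares a star's vertical angular momentum to that of a circular orbit at the same radius, assuming a flat rotation curve; it is a simplification for illustration only (definitions of Jc vary, and it is often taken at fixed orbital energy instead).

import numpy as np

V_CIRC = 220.0  # km/s, assumed constant circular speed of the Milky Way disk

def circularity(x, y, vx, vy):
    # x, y in kpc; vx, vy in km/s; returns the toy Jz/Jc.
    jz = x * vy - y * vx          # z-component of the specific angular momentum
    jc = np.hypot(x, y) * V_CIRC  # circular orbit at the same in-plane radius
    return jz / jc

print(circularity(8.0, 0.0, 0.0, 220.0))    # disk-like, near-circular orbit -> ~1
print(circularity(8.0, 0.0, 150.0, -50.0))  # plunging/retrograde orbit -> well below 1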

 

Cover image: The Milky Way, LMC, SMC from Cerro Paranal in the Atacama Desert, Chile. [ESO / Y. Beletsky]

 


JPL Will Present their Mars Program Concept at the 2015 Humans to Mars Summit by The Planetary Society

JPL will present their humans to Mars program concept at the Humans to Mars Summit and publish it as a peer-reviewed article in the New Space Journal.


The Cosmic Microwave Oven Background by The Planetary Society

Over the past couple of decades the Parkes Radio Telescope in Australia has been picking up two types of mysterious signals, each lasting just a few milliseconds. The source of one of these signals may have finally been found—and an unexpected source at that.


Pretty pictures of the Cosmos: Life and death in the Universe by The Planetary Society

Astrophotographer Adam Block brings us images showcasing the evolutionary cycles in our universe.


April 17, 2015

Laurie Penny on Voting. by Feeling Listless

Politics In the New Statesman. This paragraph pretty much sums up the situation:

"What are we supposed to do with this rotating cast of political disappointments, this hydra with a hundred arseholes? How do we express our disgust for this antique shell of a democracy? I wish, more than anything, that there was a simple answer. The truth is far more complex and infinitely sadder: whatever the outcome of this election, there is a battle ahead for anyone who believes in social justice. The truth right now is that there is only one choice you get, and that’s the face of your enemy. The candidates aren’t all the same but they look similar enough if you squint: a narrow palette of inertia and entitlement. We made the mistake of thinking they were all the same in 2010, that the Tories could not possibly be worse than New Labour. Turns out we were wrong. The question on the table isn’t whether we’ll ever get the government we deserve. The question is whether we want the next five years to be disastrous or merely depressing. The choice is between different shades of disillusion."


Big Mac in London by Goatchurch

A once in a lifetime experience required me to celebrate with a full-on McDonald’s burger.
It was horrible.

Time to go home.


Etsy Goes Public by Albert Wenger

I just woke up in Tokyo where Susan and I are visiting with our children (both as a vacation and as part of home schooling the kids). In the meantime the day is ending in the US but I still want to take the time to congratulate Chad Dickerson, Kristina Salen and the rest of the team at Etsy on the company’s successful IPO. As a New Yorker I am thrilled to have a company of Etsy’s scale and importance here.

It is also a good time to remember that many people have contributed to the success of Etsy over the years. As an early investor I want to especially thank Rob Kalin, Chris Maguire, Haim Schoppik, Jared Tarbell and Matt Stinchcomb for getting Etsy off the ground. There were lots of ups and downs along the way but without them Etsy wouldn’t be here today.  Thanks!


New views of three worlds: Ceres, Pluto, and Charon by The Planetary Society

New Horizons took its first color photo of Pluto and Charon, while Dawn obtained a 20-frame animation looking down on the north pole of a crescent Ceres.


April 16, 2015

Soup Safari #21: Sweet Potato, Aubergine and Harissa at LEAF. by Feeling Listless







Dinner. £3.95. Leaf, 65-67 Bold Street, Liverpool L1 4EZ. Phone: 0151 707 7747. Website.


Why You Must Register To Vote. by Feeling Listless

Politics Voter registration closes on the 20th April, so I thought it was about time I posted again this letter, which I originally wrote ten years ago. Even if some of the details have changed (and how), it is still incredibly relevant. Register. Do it, do it now. Here.

Dear Disaffected Voter,

There was a survey today which said that only one in three young people will be making the effort to vote on Thursday. The turnout is generally going to be about 60%. My own constituency, Riverside, had the lowest turnout in the whole country. There are many millions of people in the land who just don't see the point in voting.

There'll be some of you who won't be voting because for some reason you simply can't. You recently moved house and didn't have enough time to get your vote moved to your new house. You'll be on holiday and the whole postal voting thing couldn't be scheduled properly while you're away. Those and a whole raft of perfectly good reasons. I'm not talking to you.

I'm talking to the rest. You'll be split into two camps. Those who can't be bothered and those who don't see the point. Yes, you. You idiot.

If you're insulted by that, you should be.

The biggest idiots are the ones who can't be bothered. The ones who have the facility to vote, aren't impeded, but simply can't be arsed walking all the way to the polling station, even though there are enough of them that the local one will be in the next street. Do you realise you're screwing things up for the rest of us? Here is a list of the knock-on effects of you not showing up.

(1) It makes us all look bad. There are certain parts of the world where people don't have the choice of more than one party, or for that matter the ability to vote at all. Not naming any names. In some of these places people have been killed whilst they've fought to get the chance to choose who they want as a leader. By not voting yourself, you're pissing on their fight because you're devaluing what they're fighting for. You're like Cameron's dad in Ferris Bueller's Day Off. Lovely car parked up in the garage being wasted. Take it out for a spin once in a while.

(2) It's not a fair contest. I was watching the Olympics last year, and in one of the races a rank outsider won a gold medal. But he was seriously pissed off -- because the great runners in the sport hadn't been there to contest their title so it was sort of a default win. By not showing your support for a party, whoever wins won't necessarily have won because the country wants them to be there. It'll be because the majority of 60% of the country wants them there. Which isn't the same thing.

(3) It makes you look bad. If you can't be bothered spending twenty minutes of the day going into a room in a school somewhere to put a cross on a slip of paper, a process which has been made as easy as possible (they even print the name of the party on the ballot paper now), what, frankly, are you good for?

Now there are the rest of you who are making a point of not voting. My father believes that everyone should be forced to vote by law, even if they show up and spoil their ballot paper. Within the current system it's your choice and right not to vote. So there will be a percentage of people who don't vote because they believe it sends a message that they're unhappy with the political process in this country. There are a couple of flaws in this plan:

(1) Politicians won't give a shit about you. Because you didn't turn up at a polling station, come the day they don't even know you exist. If you don't like the political process the only way to develop it is to engage with politicians and ask for that change. Some of the parties have ideas for reform using systems such as proportional representation, which means that every vote is counted.

(2) Your plan only works if no one votes. Like that's going to happen. No matter what you do, someone will be Prime Minister on Friday.

There are some, such as the 66% of students I mentioned earlier, who aren't voting because they say that the manifestos and party policies aren't offering anything to them. What doesn't occur to you is that manifestos are written to interest the various demographics of voters. So if you don't turn up, you're not a voter, so why should they try and attract you with tailored policies? So effectively if enough of you people turned up and voted, it'd frighten the shit out of the politicians and they'd have to start listening and developing useful policies so that they can keep you on their side. There were no policies affecting women in manifestos until women got the vote. It's pretty much the same thing. You turn up, so will they.

I know this has been a bit freewheeling. If I'd wanted to I could have found a bunch of statistics and anecdotal evidence to back up some of these things. But I thought I'd go for the simple, direct approach because I don't think I've said anything which you don't already know.

I'm just trying to give you a nudge.

Even if you turn up and vote for a man dressed as a banana you'll at least have the satisfaction of knowing when the announcements are made, someone who just wanted to have a bit of fun hasn't lost their deposit.

Just don't waste your vote. Pick a party and go.

And if the one you pick doesn't win, there's always next time....

Stu.


Females are strong as hell. by Feeling Listless



Film Here's the trailer for Suffragette which foregrounds Carey Mulligan and Anne-Marie Duff whilst giving Meryl the final line. "Never give up the fight." Yes indeed. Although the film isn't out until the day before my birthday, the release of the trailer now is to coincide with the election, as indicated by the closing hashtag #votingmatters. Yes, yes it does. But the slug line's also a perfect piece of marketing "Recruiting 2015", because as the recent Amanda Vickery documentary demonstrated (and the political party manifestos), the fight goes on.


C-3PO, PhD: Machine Learning in Astronomy by Astrobites

The problem of Big Data in Astronomy

Astronomers work with a lot of data. A serious ton of data. And the rate at which telescopes and simulations pour out data is increasing rapidly. Take the upcoming Large Synoptic Survey Telescope, or LSST. Each image taken by the telescope will be several GBs in size. Between the 2000 nightly images, processing over 10 million sources in each image, and then sending up to 100,000 transient alerts, the survey will result in more than 10 Terabytes of data… EVERY NIGHT! Throughout the entire project, more than 60 Petabytes of data will be generated. At one GB per episode, it would take 6 million seasons of Game of Thrones to amount to that much data. That’s a lot of science coming out of LSST.

One of the largest problems with handling such large amounts of astronomical data is how to efficiently search for transient objects: things that appear and disappear. A major challenge of transient astronomy is how to distinguish something that truly became brighter (like a supernova) from a mechanical artifact (such as a faulty pixel, cosmic ray, or other defect). With so many images coming in, you can’t retake them every time to check if a source is real.

Before the era of Big Data, astronomers could often check images by eye. With this process, known as “scanning”, a trained scientist could often distinguish between a real source and an artifact. As more images have come streaming in, citizen science projects have arisen to harness the power of eager members of the public in categorizing data. But with LSST on the horizon, astronomers are in desperate need of better methods for classifying images.

Bring in Machine Learning

Fig. 1 – A visual example of a machine learning classification problem. Here, trees are sorted by two features: leaf size and number of leaves per twig. The training set (open points) has known classifications (Species 1, 2, or 3). Once the training set has been processed, the algorithm can generate classification rules (the dashed lines). Then, new trees (filled points) can be classified based on their features. Image adapted from http://nanotechweb.org/cws/article/lab/46619

Today’s paper makes use of a computational technique known as machine learning to solve this problem. Specifically, they use a technique known as “supervised machine learning classification”. The goal of this method is to derive a classification of an object (here, an artifact or a real source) based on particular features that can be quantified about the object. The method is “supervised” because it requires a training set: a series of objects and their features along with known classifications. The training set is used to teach the algorithm how to classify objects. Rather than having a scientist spell out the rules that define a classification, this technique develops those rules as it learns. After this training set is processed, the algorithm can classify new objects based on their features (see Fig. 1).

To better understand supervised machine learning, imagine you are trying to identify species of trees. A knowledgeable friend tells you to study the color of the bark, the shape of the leaves, and the number of leaves on a twig — these are the features you’ll use to classify. Your friend shows you many trees, and tells you their species name (this is your training set), and you learn to identify each species based on their features. With a large enough training set, you should be able to classify the next tree you come to, without needing a classification from your friend. You are now ready to apply a “supervised learning” method to new data!
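
Here is what that looks like in a few lines of Python, using scikit-learn's random forest classifier as an example algorithm. The tree measurements and species labels below are invented purely for illustration.

from sklearn.ensemble import RandomForestClassifier

# Training set: [leaf size in cm, leaves per twig], with known species labels.
features = [[2.0, 5], [2.2, 6], [7.5, 1], [8.0, 1], [4.0, 3], [4.5, 3]]
labels = ["species_1", "species_1", "species_2", "species_2", "species_3", "species_3"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)  # "learning" the classification rules from the training set

# A new tree we have never seen before: classify it from its features alone.
print(clf.predict([[7.8, 1]]))  # -> ['species_2']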

Using machine learning to improve transient searches

Fig. 2 – The performance of the autoScan algorithm. The false detection rate is how often an artifact is labeled a true source. The missed detection rate (or false-negative rate) is how often real sources are labeled as artifacts. For a given tolerance level (tau), the authors can select how willing they are to accept false positives in exchange for lower risk of missing true sources. The authors adopted a tolerance of 0.5 for their final algorithm. This level correctly identifies real sources 96% of the time, with only a 2.5% rate of false positives. Fig. 7 from Goldstein et al. 2015.

The authors of today’s paper developed a machine learning algorithm called autoScan, which classifies possible transient objects as artifacts or real sources. They apply this technique to imaging data from the Dark Energy Survey, or DES. The DES Supernova program is designed to measure the acceleration of the universe by imaging over 3000 supernovae and obtaining spectra for each. Housed in the Chilean Andes mountains, DES will be somewhat akin to a practice run for LSST, in terms of data output.

The autoScan algorithm uses a long list of features (such as the flux of the object and its shape) and a training set of almost 900,000 sources and artifacts. After this training set was processed, the authors tested the algorithm’s classification abilities against another validation set: more objects with known classifications that were not used in the training set. AutoScan was able to correctly identify real sources in the validation set 96% of the time, with a false detection (claiming an artifact to be a source) rate of only 2.5% (see Fig. 2).
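
The tolerance level tau mentioned in the Figure 2 caption is simply a cut on the score the classifier assigns to each candidate, and moving it trades missed detections for false detections. A toy sketch of that bookkeeping, with made-up scores and labels:

import numpy as np

scores = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.10, 0.70, 0.20])  # classifier score: P(real source)
is_real = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)             # ground truth from a validation set

def rates(tau):
    predicted_real = scores >= tau
    missed = np.mean(~predicted_real[is_real])  # real sources labelled as artifacts
    false = np.mean(predicted_real[~is_real])   # artifacts labelled as real sources
    return missed, false

for tau in (0.3, 0.5, 0.7):
    m, f = rates(tau)
    print(f"tau={tau}: missed detection rate={m:.2f}, false detection rate={f:.2f}")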

With autoScan, the authors are prepared to analyze new data coming live from the Dark Energy Survey. They can greatly improve the efficiency of detecting transient sources like supernovae by easily distinguishing them from instrumental artifacts. But better techniques, such as more clever development of training sets, will continue to beat down the rate of false positives.

Machine learning algorithms will become critical to the success of future large surveys like LSST, where person-power alone will be entirely insufficient to manage the incoming data. The same can be said for Gaia, TESS, the Zwicky Transient Facility, and pretty much any other upcoming astronomical survey.  Citizen science projects will still have many practical uses, and the trained eye of a professional astronomer will always be essential. But in the age of Big Astro, computers will continue to become more and more integral parts of managing the daily operations of research and discovery.


Shoemaker NEO Grant Winners Announced: Saving the World by The Planetary Society

The six winners of the 2015 Shoemaker NEO Grants will use the grants to upgrade their observatories to improve their abilities to study potentially dangerous asteroids.


April 15, 2015

Extraneous Text. by Feeling Listless

Politics Unlike a lot of parties, the Liberal Democrats are at least committed to making their manifesto available to as many people as possible to the extent that they've released an audio version, which I downloaded out of curiosity.

I haven't listened to the whole thing but it seems to be machine generated, utilising an electronic, if pretty convincing, approximation of a female voice intoning in the Queen's English. She sounds a bit like Anneke Wills with a much deeper voice.

The tracks also sound like they've been knocked together by machine. The first one simply says "Liberal Democrat Manifesto 2015". Track two says "title page". Track three repeats "Liberal Democrat Manifesto 2015" before going on with "Stronger Economy. Fairer Society. Opportunity for Everyone."

Then track four begins with: "Extraneous text" before continuing into all of the key policy areas outlined on the cover which clearly suggests no one's listened to this or done some editing before it was put up on the website.

Except, after reading out the policy areas as they appear on the cover, "Extraneous text" we're told again before Ananova's successor heads off into reading the note from the back cover about devolved issues. Which either means it has been edited or it isn't reading the text exactly as it appears on the pdf but from some other version which has lots of "extraneous text".

The rest of the audio seems to follow the manifesto as is, starting with Nick Clegg's letter, then ends with the text from the back about alternative formats, the machine voice falling over as it attempts to read the web address, as these things so often do.

But yes, "extraneous text". Let's see how true that is come the coalition negotiations.


The Party Manifestos 2015: UKIP. by Feeling Listless

Politics Oh god.  Here we go...

The BBC.

Currently, British intelligence is fragmented between a number of agencies, including MI5, MI6, GCHQ and BBC Monitoring. All have different funding streams and report to different government departments. This generates a significant overlap in work and resources and risks exposing gaps in the system.

UKIP will create a new over-arching role of Director of National Intelligence (subject to confirmation hearing by the relevant Commons Select Committee), who will be charged with reviewing UK intelligence and security, in order to ensure threats are identified, monitored and dealt with by the swiftest, most appropriate and legal means available. He or she will be responsible for bringing all intelligence services together; developing cyber security measures; cutting down on waste and encouraging information and resource sharing.
The BBC's only mention. BBC Monitoring becomes part of an Orwellian restructuring of the intelligence service. Scary.

Updated 22/04/2015  Someone's bothered to ask their leader about it.  “I would like to see the BBC cut back to the bone to be purely a public service broadcaster with an international reach, and I would have thought you could do that with a licence fee that was about a third of what it currently is.”

Global Emissions
While our major global competitors - the USA, China, India - are switching to low-cost fossil fuels, we are forced to close perfectly good coal-fired power stations to meet unattainable targets for renewable capacity. If we carry on like this, the lights are likely to go out.
Pretty much as you'd expect. Investment in fracking, investment in coal and the withdrawal of investment and subsidies in renewables apart from hydro (weirdly) and where contracts have already been signed. Withdrawal from the Climate Change Act. They essentially seem to think they know better than 97% of the world's climate change scientists.

Libraries
Local authorities have significant power in matters concerning planning and housing, education, local refuse and recycling facilities, parks and leisure facilities, transport, libraries and keeping local people safe.
Yes they do. And?

Film Industry

Nothing.

Gender Equality
To increase the uptake of science learning at secondary level, we will follow the recommendations of the Campaign for Science and Engineering and require every primary school to nominate (and train, if necessary) a science leader to inspire and equip the next generation. This role will also help to address the gender imbalance in the scientific subjects.
Nothing on equal pay though.

Here's a direct link to the manifesto:

http://ukip-serv.org/theukipmanifesto2015.pdf

I wouldn't vote for them either.


The Party Manifestos 2015: Liberal Democrats. by Feeling Listless

Politics The Lib Dem manifesto cover somehow manages to encompass versions of the colours from all the other main parties apart from UKIP's. Not sure what to make of that.

The BBC

Protect the independence of the BBC while ensuring the Licence Fee does not rise faster than inflation, maintain Channel4 in public ownership and protect the funding and editorial independence of Welsh language broadcasters.

To promote the independence of the media from political influence we will remove Ministers from any role in appointments to the BBC Trust or the Board of Ofcom.

Maintain funding to BBC World Service, BBC Monitoring and the British Council.
Pretty similar to the Labour manifesto, though the BBC would still be fucked financially here, even if the mention of the licence fee at least confirms the Lib Dems still believe in it. Saying you'll maintain funding to those things doesn't indicate where that funding would come from: top-slicing the licence fee or central taxation, as it used to be.

Global Emissions
Pass a Zero Carbon Britain Act to set a new legally binding target to bring net greenhouse gas emissions to zero by 2050.
As the cover indicates, the environment runs right through the manifesto and is mentioned in relation to most areas, in transport policy for example, with the replacement of older buses with "low emission ones". There are also two pages outlining "five green laws" covering such things as recycling targets and promoting electric cars. Sadly all of that is undone somewhat by their still essentially promoting fracking, even if they want to hand completed wells over to geothermal heat developers for renewable purposes afterwards. And it doesn't really explain how you can get to zero greenhouse gas emissions by 2050, thirty-five years from now, if we're still fracking today. They're banning fracking in national parks though. So ¯\_(ツ)_/¯

Libraries
Citizens expect a good service from their public services, and rightly so. While many schools, hospitals, libraries and other public institutions offer world-class standards, we could do so much better: integrating services and making them more accessible, as well as improving the response when things go wrong.

Complete broadband rollout to every home, and create an innovation fund to help keep local GPs, post offices and libraries open.

Develop the Community Budgets model for use in rural areas to combine services, encouraging the breaking down of boundaries between different services. This will help keep rural services like GP surgeries, pharmacies, post offices and libraries open by enabling them to cooperate, share costs and co-locate in shared facilities

Support local libraries and ensure any libraries under threat of closure are offered first for transfer to the local community.
The biggest mention of libraries I've seen in the manifestos, but it's still an afterthought with little understanding of what a comprehensive library service requires. When they say "transfer to the local community" what they really mean is a donated building and volunteers. This used to be a profession. Sigh.

Film Industry
Support growth in the creative industries, including video gaming, by continuing to support the Creative Industries Council, promoting creative skills, supporting modern and flexible patent, copyright and licensing rules, and addressing the barriers to finance faced by small creative businesses.
The Arts and Culture section of the manifesto is utter garbage to be honest. "We are proud of the arts in Britain and will support them properly" it says without any detail at all. Apart from committing to free museums there's nothing. Sigh again.

Gender Equality
Set an ambitious goal to see a million more women in work by 2020 thanks to more jobs, better childcare, and better back-to-work support.

Challenge gender stereotyping and early sexualisation, working with schools to promote positive body image and widespread understanding of sexual consent law, and break down outdated perceptions of gender appropriateness of particular academic subjects.

Work to end the gender pay gap, including with new rules on gender pay transparency.

Continue the drive for diversity in business leadership, maintaining momentum towards at least 30% of board members being women and encouraging gender diversity among senior managers, too. We will work to achieve gender equity in government programmes that support entrepreneurs.
Pretty close to the other parties though the section about positive body image is welcome, especially in schools where it's so often an excuse for bullying. Elsewhere there's a commitment for "swift implementation of the new rules requiring companies with more than 250 employees to publish details of the different pay levels of men and women in their organisation" which is the same as the Tories and on which I once again ask: why 250? Why not everyone?  One other thing worth mentioning is that they want to "create a national helpline for victims of domestic and sexual violence – regardless of gender – to provide support, encourage reporting and secure more convictions."  Good.

You can download the whole manifesto here.

https://d3n8a8pro7vhmx.cloudfront.net/libdems/pages/8907/attachments/original/1429028133/Liberal_Democrat_General_Election_Manifesto_2015.pdf?1429028133

Still wouldn't vote for them though.


The Party Manifestos 2015: Plaid Cymru. by Feeling Listless

Politics Having entirely missed the Plaid Cymru manifesto, it's time for some catch-up. I really like the cover. Reminds me of the Pedro Reyes poster which was at FACT during the Liverpool Biennial in 2012 (which I wrote about here) (and which you can glimpse in the back of this shot).

Anyway, let's get on with the show.

The BBC

We will devolve broadcasting to Wales and implement recommendations on broadcasting made by Plaid Cymru to the Silk Commission.

These include establishing a BBC Trust for Wales as part of a more federal BBC within the UK. Trustees would be appointed by the Welsh Government and the appointment process including public hearings held by the National Assembly for Wales.

Responsibility for S4C, the world’s only Welsh language channel, would transfer to the National Assembly for Wales, as would the funding for the channel that is currently with the Department for Culture, Media and Sport. We will ensure that S4C is adequately funded and that the channel maintains editorial independence. Again, the Welsh Government should appoint the board members of the S4C Authority following public hearings.
Yes, OK. Essentially, I think, what they mean is that the BBC would be split into different national sections, rather like the old ITV network. Quite how that would work in relation to the licence fee, I'm not sure. Would enough money come in from Welsh viewers to fund Welsh programmes? Or would it still be a central pool? I've included the S4C stuff because it's now shifted under the BBC's wing, and this is about it being funded by central taxation.

Global Emissions
We should have full powers over our natural resources. We do not accept the imposition of artificial limits on Wales’s responsibility for its own energy generation, whether that be 50MW as at present or those recommended by the Silk Commission.

We will introduce a Climate Change Act for Wales, adopting challenging but achievable greenhouse gas reduction targets for 2030 and 2050.
The ensuing section about renewables is entirely sensible, about phasing out fossil fuels in favour of tidal and hydro sources and the like (presumably in reference to the Tidal Lagoon in Swansea), recycling targets and working with supermarkets on packaging.

Libraries
"We will create apprenticeships in the field of historical documentation and culture so that staff skills, knowledge and experiences are retained and nurtured."
Arts, Media and Culture gets a single page at the back and doesn't say anything about libraries specifically though the above sentence is interesting. I don't think any of the other parties have a manifesto commitment to archiving.

Film Industry

Nothing, which is interesting considering how important film and television production is to Wales now, especially in relation to the BBC's commitment.

Gender Equality





I've posted the whole of this because gender equality, especially in relation to pay, is integrated into everything, and I especially like the commitment to raise the status of work which happens to be predominantly carried out by women, which would presumably include increasing wages.  Also the commitment to the removal of VAT on women's sanitary protection products.  I had no idea that existed.  Why would any humane society do that?

But of course the most eye-catching bit is the photo.  Why on earth did they choose a photo which looks like a splash page for a photo story in Jackie?  What is the bloke whispering to the lady in the background?  What has the lady at the front discovered from reading her smart phone?  Was this taken especially for the manifesto or is it as I suspect clipart?  It's odd.


My Favourite Film of 2000. by Feeling Listless



Film Some films are impossible to return to in their theatrical cuts once you've visited their director's cuts or extended versions, and Untitled, the longer version of Cameron Crowe's Almost Famous, is one of them. As The AV Club identifies, in adding forty minutes Crowe doesn't really make substantial changes to the story, "much of it in the form of short conversations and anecdotal fragments depicting life on the road. But those small additions make a huge difference, fleshing out the hero’s unrequited attraction to super-groupie Penny Lane (played by Kate Hudson), and showing more clearly how his evolving relationships with the people he was supposed to be covering wound up compromising his objectivity."

After seeing Almost Famous originally at The Filmworks in Manchester, I've since bought the film three times.  Firstly the theatrical, full price mind you, from the old Music Zone in Liverpool's Williamson Square (the company which is now essentially trading as That's Entertainment), then Untitled in Region One with the cardboard cover and additional Stillwater EP via playusa.com when that was a thing and then again in R2 in an HMV sale (and I can't remember which one).  The blu-ray's a confusing animal with its extended version on disc, but no extras and the artwork featuring Penny Lane from the theatrical version.  Such are the vagaries of film publishing.  Of all of them, the R1's clearly the most beautiful, with its sepia set images plastered across the interior and some effort made to make the Stillwater cd look like a re-release of some old album.

One of the cornerstones of Untitled's extras is an epic deleted scene of William Miller (Fugit) convincing his mother to let him go on the trip by playing her Zeppelin's Stairway to Heaven as a way of demonstrating that rock music is an artform, has depth. As an addition to the film itself this would have been incredibly brave; having the audience sit and listen to all seven minutes of a record in this way isn't something you'd usually see in a relatively commercial film. In the event we didn't have to: the band denied them the usage. As Wikipedia explains:

"Crowe took a copy of the film to London for a special screening with Led Zeppelin members Jimmy Page and Robert Plant. After the screening, Led Zeppelin granted Crowe the right to use one of their songs on the soundtrack — the first time they had ever consented to this since allowing Crowe to use "Kashmir" in Fast Times at Ridgemont High — and also gave him rights to four of their other songs in the movie itself, although they did not grant him the rights to "Stairway to Heaven"."
And so the scene appears without sound, with a direction so that viewers can begin listening to the song at the correct moment. Now, fifteen years later, someone has cheekily uploaded the scene to YouTube with the song added for speed:



If nothing else this is a reminder to me that perhaps I don't listen to enough music or rather I don't listen to enough music whilst not attempting something else at the same time even if it's simply trying to get to work.  But long ago I made peace with films demanding my sitting time, that I'm more likely to spend ninety-odd minutes in the company of pictures and sound and a narrative.  Which isn't to say I haven't had a similar reaction to Frances McDormand in this scene when faced with something of such inarguable majesty.


The Cosmic Microwave Oven Background by Astrobites

Title: Identifying the source of perytons at the Parkes radio telescope
Authors: E. Petroff, E. F. Keane, E. D. Barr, J. E. Reynolds, J. Sarkissian, P. G. Edwards, J. Stevens, C. Brem, A. Jameson, S. Burke-Spolaor, S. Johnston, N. D. R. Bhat, P. Chandra, S. Kudale, S. Bhandari
First Author’s institution:  Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Australia

Over the past couple of decades the Parkes Radio Telescope in Australia has been picking up two types of mysterious signals, each lasting just a few milliseconds. One kind, the Fast Radio Bursts (FRBs), have come from seemingly random points in the sky at unpredictable times, and are thought to have a (thus far unknown) astronomical origin. The other kind of signal, perytons, which were named after the mythical winged creatures that cast the shadow of a human, have been found by this paper to have an origin much closer to home.

Although the 25 detected perytons are somewhat similar to FRBs, with a comparable spread in frequencies and duration, the authors' suspicions were raised when they noticed that the perytons all happened during office hours, and mostly on weekdays.  When they corrected for daylight savings, they found that perytons were even more tightly distributed—they mostly came at lunchtime. Mostly.


Arrival times of perytons (pink) compared with FRBs (blue) at the Parkes Radio Telescope. Real astronomical signals probably don’t all come at lunchtime.

To search for the true origin of the perytons, Petroff et al. took advantage of the fact that the Parkes has just been fitted with a Radio Frequency Interference (RFI) monitor, which continuously scanned the whole sky to detect any background radio sources that might interfere with the astronomical observations.

In the week beginning 19th January 2015 the Parkes radio telescope detected three new perytons. Searching through the RFI data, the authors found that each peryton, with a radio frequency of 1.4GHz, was accompanied by another signal at 2.4GHz. Crucially, they could then compare their results with those from an identical RFI monitor at the nearby ATCA observatory. The 2.4GHz signal was nowhere to be seen in the ATCA data. Not only were the perytons not from space, they had to be coming from somewhere nearby the telescope.
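
The comparison described above is essentially a time-coincidence search: for each 1.4 GHz peryton, look for a 2.4 GHz RFI event close enough in time. A rough sketch of the idea, with hypothetical event times and a made-up matching window:

import numpy as np

peryton_times = np.array([1200.0, 4521.5, 8000.2])          # 1.4 GHz detections, seconds (made up)
rfi_24ghz_times = np.array([55.0, 1199.8, 4521.7, 9999.0])  # 2.4 GHz monitor events, seconds (made up)
window = 1.0  # seconds; hypothetical coincidence window

for t in peryton_times:
    matched = rfi_24ghz_times[np.abs(rfi_24ghz_times - t) < window]
    if matched.size:
        print(f"peryton at {t:.1f}s: coincident 2.4 GHz event at {matched[0]:.1f}s")
    else:
        print(f"peryton at {t:.1f}s: no 2.4 GHz counterpart")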

Another clue came when the authors found that, although they had only observed three perytons, there were plenty of 2.4GHz signals in the RFI data that didn’t have an associated peryton. Petroff et al. decided to search for anything that would normally give off 2.4GHz signals, but occasionally emit a 1.4 GHz burst. Suspicion fell on the on-site microwave ovens—not only do they operate at 2.4GHz, the telescope had been pointing in the direction of at least one microwave every time a peryton had been seen.


A suspicious-looking microwave. Image Source: Wikimedia Commons

With the suspects cornered, the authors set about trying to create their own perytons. They found that the magnetrons in microwaves naturally emit a 1.4GHz pulse when powering down. Normally this signal is absorbed by the walls of the microwave, but if someone were to open the microwave door before it finished its cycle then the 1.4GHz pulse could escape. Using this technique, the authors were able to generate perytons with a 50 percent success rate. After decades of searching, the source of these mysterious signals had been found.

What about the FRBs? With the perytons confirmed as coming from Earth and not space, doubt was cast on the origin of the FRBs. The authors suggest that the FRBs are astronomical sources, and not linked with the perytons,  for two reasons:

  • The FRBs come at random times and random locations, whereas the perytons were all detected during the day in the general direction of microwaves.
  • The signal from FRBs is consistent with them having traveled through space, with indicators of interaction with interstellar plasma not seen in the perytons.

The authors finish by suggesting that a final test can be made the next time an FRB is observed. If no simultaneous 2.4GHz signal is seen, then it would conclusively disprove any link between the FRBs and the perytons. What the FRBs really are remains unknown.

 


Dragon Launches to Station, but Falcon Doesn't Stick Landing by The Planetary Society

SpaceX's ISS-bound Dragon spacecraft is in orbit, but the drone ship landing of the company's Falcon 9 rocket was unsuccessful.


Artist's Drive: A Sol 950 Colorized Postcard by The Planetary Society

Amateur image processor Damia Bouic shares the process behind creating stunning panoramas with Curiosity images.


April 14, 2015

The Party Manifestos 2015: The Conservatives. by Feeling Listless

Politics First things first with the Tory manifesto. It has a better cover than last time. George Osborne has his back to us for one thing, but the text is also slightly better balanced even if it's arguably missing some commas and a conjunction. As it stands:

STRONG LEADERSHIP
A CLEAR ECONOMIC PLAN
A BRIGHTER, MORE SECURE FUTURE

which, because of the comma in the final line, looks like a run-on sentence. Though I appreciate that the more logical,

STRONG LEADERSHIP,
A CLEAR ECONOMIC PLAN AND
A BRIGHTER, MORE SECURE FUTURE.

is less aesthetically pleasing.  Maybe next time, eh Tories?

But what about the policies?

The BBC
"A free media is the bedrock of an open society. We will deliver a comprehensive review of the BBC Royal Charter, ensuring it delivers value for money for the licence fee payer, while maintaining a world class service and supporting our creative industries. That is why we froze the BBC licence fee and will keep it frozen, pending Charter renewal. And we will continue to ‘topslice’ the licence fee for digital infrastructure to support superfast broadband across the country."
BBC still fucked then. Less money to make programmes, that sort of thing.  Top-slicing the licence fee for something which arguably, iPlayer excepted, has nothing to do with the licence fee.  Certain other media companies will be very pleased with all this.  The section is called We will support our media.  It should actually be We will support certain media because they agree with us politically.

Global Emissions
"We have been the greenest government ever, setting up the world’s first Green Investment Bank, signing a deal to build the first new nuclear plant in a generation, trebling renewable energy generation to 19 per cent, bringing energy efficiency measures to over one million homes, and committing £1 billion for carbon capture and storage. We are the largest offshore wind market in the world. We will push for a strong global climate deal later this year – one that keeps the goal of limiting global warming to two-degrees firmly in reach. At home, we will continue to support the UK Climate Change Act. We will cut emissions as cost-effectively as possible, and will not support additional distorting and
expensive power sector targets."
Keeping commitments, but everything elsewhere is disastrous.  Commitments to fracking, investment in oil and nuclear power, "halting the spread of offshore wind farms" because they're unpopular.  There is investment in renewables, but support is only significant if they offer "value for money" rather than, you know, keeping a habitable planet available for us to live on.  Across the two pages about the environment everything is about the business aspects of it rather than what it means for the planet.

Libraries
"We will help public libraries to support local communities by providing free wi-fi. And we will assist them in embracing the digital age by working with them to ensure remote access to e-books, without charge and with appropriate compensation for authors that enhances the Public Lending Right scheme."
Libraries still fucked too then. If they really, as the title of this section says, "continue to support local libraries", they'd reduce the cuts to local councils, who're then closing the buildings and putting staff out of work or replacing them with volunteers in donated spaces.  Free wi-fi in libraries has nothing to do with libraries. Offering remote access to e-books doesn't help either. It suggests a move towards removing a comprehensive service in favour of directing people to a website.

Film Industry
"The creative industries have become our fastest-growing economic sector, contributing nearly £77 billion to the UK economy – driven in part by the tax incentives for films, theatre, video games, animation and orchestras we introduced. Our support for the film industry has resulted in great British films and encouraged Hollywood’s finest to flock to the UK. We will continue these reliefs, with a tax credit for children’s television next year, and expand them when possible. We will protect intellectual property by continuing to require internet service providers to block sites that carry large amounts of illegal content, including their proxies. And we will build on progress made under our voluntary anti-piracy projects to warn internet users when they are breaching copyright. We will work to ensure that search engines do not link to the worst-offending sites."
Except the "support" is through the National Lottery funnelled through the BFI, so there's a bit of a grey area on that though there are the tax breaks, I'll give them that, even if they're not as generous as some.  But overall the cuts to the across the board pretty much invalidate anything written here since the film industry is at its strongest when its part of an cultural ecosystem.  There's much talk of how they're keeping museums and galleries free to enter whilst quietly glossing over the damage done to back offices because of the cuts.

Gender Equality
"We now have more women-led businesses than ever before, more women in work than ever before and more women on FTSE 100 boards than ever before. We want to see full, genuine gender equality. The gender pay gap is the lowest on record, but we want to reduce it further and will push business to do so: we will require companies with more than 250 employees to publish the difference between the average pay of their male and female employees. Under Labour, women accounted for only one in eight FTSE 100 board members. They represent a quarter of board members today and we want to see this rise further in the next Parliament. We also want to increase the proportion of public appointments going to women in the next Parliament, as well as the number of female MPs."
Why not all businesses? Why do they have to have over 250 employees? All looks a bit weak I'm afraid, and nothing as definite as it could and should be. Women's issues (for want of a better phrase) (sorry) are mentioned throughout the manifesto but in a lot of cases it's in the realm of "We don't do this already?" When they say things like "We have made protecting women and girls from violence and supporting victims and survivors of sexual violence a key priority" all I can think is, "and why wouldn't you?"

Anyway, here's a direct link to the manifesto:

https://s3-eu-west-1.amazonaws.com/manifesto2015/ConservativeManifesto2015.pdf

Still wouldn't vote for them though.


The Party Manifestos 2015: The Green Party. by Feeling Listless

Politics As I type this, the Green Party launch is live on television. Has this happened before? Across all the news channels? That's the Green surge. In some polls they're neck and neck with the Lib Dems, which some would suggest says more about the Lib Dems than the Greens.  But having these policies in the public realm, being discussed in the mainstream, is a good thing, even if most of them have little chance of being put into practice. It causes people to question the other politics with which they're being presented.

They've also produced a BSL version:



Now on to the policies. The pdf of the Manifesto doesn't have a search function and also won't copy/paste the text which is why this is all going to look a bit ropey.

The BBC



Vague and deliberately so. The Greens' media spokesman made a poor showing on The Media Show the other week, where he seemed to suggest that the Greens would abolish the licence fee, which they view as a poll tax, in favour of something in the region of central taxation, so that richer people would pay more than poor people, though as I think presenter Steve Hewlett noted, if it's being funded directly by the government that suggests the potential for more interference, not less.  As a side note, I'd also point out that if rich people are paying more towards the BBC, there's more likely to be an objection by them to the BBC making the kinds of programmes they're less interested in.  The licence fee creates a level playing field in those terms.  Hewlett also noted that their indication that "no individual or company owns more than 20% of the media market" is a bit vague too, in that it doesn't actually say how that would be measured.  The spokesman couldn't explain that either. [Updated later: The Guardian has a clearer description of their proposal. Still doesn't make any kind of sense.]

Global Emissions



Which is what climate change policies from all parties should look like. But here's my favourite part of the manifesto. They actually bother to explain the science:



Amazing.

Libraries



No problems here.  My entire sector's been fucked since 2010, if not earlier.  The only problem I'd see is organisations being convinced to start paying people for work again now that they're used to utilising volunteer and intern labour.  As the architect Frank Gehry once said when asked why his practice was so small, it was because he would rather pay people, because free labour becomes an addiction (I'm paraphrasing from this).

Film Industry



Nothing specific, though the above is interesting. It's an initiative to make sure that people working in the creative industries aren't taken advantage of, and that is important in relation to film, where there are loads of low-budget productions that exist because of the goodwill of participants.

Gender Equality



There are pages and pages and not just here - as with the environment, equality is threaded through all of their policies.  So it's easier just to highlight the section of the contents page. Oddly some of it isn't quite as on point as the Labour manifesto. They say they'll promote equal pay but don't go as far as to shame companies into it or some other approach. What's perhaps most interesting is its placement in the manifesto right after environmental policies and above the NHS. They pretty much understand that if you sort out the environment and the inequalities in society everything else becomes much, much easier.

Here's a direct link to a pdf of the whole manifesto (they didn't produce a printed version because it wouldn't be environmentally conscious).

https://www.greenparty.org.uk/assets/files/manifesto/Green_Party_2015_General_Election_Manifesto.pdf

I'm voting for them.


Elizabeth Wurtzel on The Good Wife. by Feeling Listless

TV Writing for The Guardian for the first time since 2010, Wurtzel explains why it should be lauded just as much as Mad Men if not more so. Mild spoilers unless you've already worked out the Hogwarts-like structure of the series (or indeed aren't at the beginning of season three like me):

"From the first episode, when Alicia was a first-year associate making her way at Lockhart-Gardner, I was rooting for her. I root for her when she is wrong and awful, which is all the time. Alicia is difficult and demanding and unfair – with herself and with everyone else – because she would rather be right than nice. The Good Wife invents a rare female character: Alicia is not interested in good intentions because they have nothing to do with the correct result, in fact they are the enemy of it."
Alicia's the focus of the piece but Kalinda's probably The Good Wife's secret weapon. Sometimes, when a story is flagging, or the legal stuff is a bit thin and the writers clearly know it, they'll throw in a scene where Kalinda does something completely outrageous and we're back. Incidentally, the lift scene at the end of season 2 is one of the best pieces of television as a visual medium I think I've seen.  Onward into season 3.


Discovering the mysterious companions of Cepheids by Astrobites

Title: Discovery of blue companions to two southern Cepheids: WW Car and FN Vel

Authors: V. Kovtyukh, L. Szabados, F. Chekhonadskikh, B. Lemasle, and S. Belik

First Author’s Institution: Astronomical Observatory, Odessa National University

Status: Published in MNRAS

Background


Figure 1: This shows the spectrum of a Cepheid without a companion (bottom) and the spectrum of a B-star (top). As we can see, the Balmer line from the B-star nearly overlaps with the Ca II H line of the Cepheid. This is also Figure 1 from the paper.

Cepheid variable stars are perhaps most famous for serving as standard candles in the cosmic distance ladder. The periods of their variability are related to their luminosities by the period-luminosity relationship (now sometimes also called Leavitt’s law). This was first discovered by Henrietta Swan Leavitt over a hundred years ago but this relationship is something astronomers are still working to improve today. Astrobites has also discussed Cepheids and other variable stars and the cosmic distance ladder in previous posts.
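
As a reminder of how the relation gets used as a standard candle, here is a worked sketch combining Leavitt's law with the distance modulus. The V-band coefficients are illustrative only, since the exact calibration depends on the band and the study, and extinction is ignored.

import numpy as np

def absolute_magnitude(period_days, a=-2.43, b=-4.05):
    # Illustrative Leavitt's law: M_V = a * (log10 P - 1) + b.
    return a * (np.log10(period_days) - 1.0) + b

def distance_pc(apparent_mag, period_days):
    mu = apparent_mag - absolute_magnitude(period_days)  # distance modulus m - M
    return 10 ** (mu / 5.0 + 1.0)                        # d = 10^(mu/5 + 1) parsecs

# A 10-day Cepheid observed at V = 11.0 comes out at roughly 10 kpc.
print(distance_pc(11.0, 10.0))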

Though the physics behind Cepheid variability is well-understood, we still have significant difficulties to overcome in order to improve the zero-point of Leavitt’s law. Cepheids are supergiants. They are stars several times the mass of our Sun that have evolved off of the main sequence of the stellar color-magnitude diagram (in other words they’re in the stellar ‘afterlife’). Because bigger stars burn their fuel faster than smaller stars, Cepheids are also young stars. Thus they are often found in the dusty regions of galaxies so we have to deal with absorption, reddening, and dust scattering when we observe them. The period-luminosity relationship may also have a dependence on metallicity (the fraction of atoms in the star that are heavier than helium) that we still don’t fully understand.

Another common problem that we face when using Cepheids—and the focus of today’s paper!—is the presence of a binary companion. Cepheids, like most stars, are often found in binary systems; in fact, more than 50% of Galactic Cepheids are expected to have at least one companion. That number is so high that we can’t deal with them by simply throwing out the ones that have companions. Separating the luminosity of the Cepheid from that of its companion is important if we want to use the period-luminosity relationship.

Fortunately, since binary systems are incredibly useful in astronomy in their own right, astronomers have many ways of detecting them using spectroscopy, photometry, and astrometry. For one thing, observations of binary stars are pretty much the only way we can directly determine the masses of stars other than the Sun. Today's astrobite, however, focuses on a method of detecting binaries that is useful specifically for finding the companions of Cepheids.

Methods

Figure 2: Figure 3 from the paper, showing the spectra of the two Cepheids for which they discovered blue companions. We can see that the Ca II H line on the right is “deeper” than the Ca II K line on the left.

The authors of today's paper used the "calcium-line method" (we'll elaborate more on this in a minute) to study 103 southern Cepheids. This allowed them to identify a new blue binary companion to the Cepheid WW Car and independently confirm another recent discovery of a blue binary companion to the Cepheid FN Vel. They were also able to recover the known blue companions of eight other Cepheids.

To do this, they used spectra taken with the MPG telescope and FEROS spectrograph and focused on the 3900-4000 angstrom range to study the depth of the calcium lines Ca II H (3968.47 angstroms) and Ca II K (3933.68 angstroms) and also the Hϵ line (3970.07 angstroms) of the Balmer series. In Cepheids without blue companions, the depths of the two calcium lines will be equal (see Figure 1). However, if the Cepheid has a blue (hot) companion, then the Hϵ line of the blue companion will be superimposed on the Ca II H line, causing it to be deeper than the Ca II K line. Thus, by studying the relative depths of these two lines, they were able to identify Cepheids with hot companions. This method works for companion stars hotter than spectral type A3V (hence the "blue" moniker in the title), so they were also able to see the increased line depth in Cepheids with previously discovered hot companion stars. To make sure that they were not affected by lines from Earth's atmosphere, the authors also made observations of blue stars at the same time as their Cepheid observations. Spectra for WW Car and FN Vel can be seen in Figure 2.
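As a rough illustration of the idea (this is not the authors' actual pipeline; the toy spectrum and the "deeper than" threshold below are entirely made up), the comparison boils down to measuring two line depths and checking whether Ca II H comes out deeper than Ca II K:

```python
import numpy as np

# Rest wavelengths (angstroms) of the lines discussed above.
CA_K = 3933.68   # Ca II K
CA_H = 3968.47   # Ca II H (blended with H-epsilon at 3970.07 A when a hot companion is present)

def line_depth(wavelength, flux, line_center, window=2.0):
    """Depth of an absorption line: 1 minus the minimum continuum-normalised
    flux within +/- `window` angstroms of the line centre."""
    mask = np.abs(wavelength - line_center) < window
    return 1.0 - flux[mask].min()

def has_blue_companion(wavelength, flux, threshold=0.05):
    """Flag a Cepheid as having a hot companion if Ca II H (contaminated by the
    companion's H-epsilon) is noticeably deeper than Ca II K.
    `threshold` is an arbitrary illustrative cut, not a value from the paper."""
    return line_depth(wavelength, flux, CA_H) - line_depth(wavelength, flux, CA_K) > threshold

# Toy continuum-normalised spectrum with a slightly deeper Ca II H line:
wl = np.linspace(3900, 4000, 2000)
fl = np.ones_like(wl)
fl -= 0.6 * np.exp(-0.5 * ((wl - CA_K) / 0.8) ** 2)   # Ca II K
fl -= 0.7 * np.exp(-0.5 * ((wl - CA_H) / 0.8) ** 2)   # Ca II H + H-epsilon blend
print(has_blue_companion(wl, fl))  # True for this toy spectrum
```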

They then searched the literature for other evidence of binary companions for WW Car and FN Vel. FN Vel's binary companion had been discovered independently and WW Car had previously been suspected of having a binary companion because of its photometry. WW Car's pulsation period also seemed to display a long-term sinusoidal variation, which the authors speculate could be attributed to the light-time effect that is sometimes seen in binary systems. This is caused by the light traveling different distances to reach us as the Cepheid orbits about the system's center of mass. Danish astronomer Ole Rømer actually used this to calculate the speed of light in the 1600s!

Figure 3: The O – C Diagram for WW Car (Figure 4 from the paper). We can see the sinusoidal variations in period that the authors speculate may be caused by the light-time effect of a binary system.

WW Car's varying periodicity can be seen in its O – C diagram (observed minus calculated) in Figure 3. An O – C diagram shows the difference in time between when we expect to see a certain phase of the Cepheid (the 'calculated' part of the diagram) and when we actually see it (the 'observed' part of the diagram). If the period isn't changing, we'd see points best fit by a horizontal line, because the dates on which we expect the Cepheid to be at a certain part of its phase–say the maximum–would be about the same as when we actually see the maximum observationally. In WW Car's case, they see this wave-like structure in the O – C diagram, so they suggest that this could be more evidence for the existence of the blue companion they detected in their analysis of its calcium lines.
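For anyone who wants to see the bookkeeping, here's a minimal sketch of how an O – C diagram is built, using a made-up linear ephemeris rather than WW Car's real one:

```python
import numpy as np

# Hypothetical linear ephemeris: predicted times of maximum light are
#   T_calc(n) = epoch + n * period   (times in days, e.g. HJD)
epoch = 2450000.0     # reference time of maximum (made up)
period = 4.6769       # assumed constant pulsation period in days (made up)

def o_minus_c(observed_maxima):
    """Return cycle numbers and O - C residuals for observed times of
    maximum light, given the linear ephemeris above."""
    observed_maxima = np.asarray(observed_maxima)
    cycles = np.round((observed_maxima - epoch) / period)
    calculated = epoch + cycles * period
    return cycles, observed_maxima - calculated

# A constant period gives residuals scattered about a horizontal line;
# a binary Cepheid's light-time effect adds a sinusoid on top of that.
cycles = np.arange(0, 2000, 50)
true_times = epoch + cycles * period + 0.02 * np.sin(2 * np.pi * cycles / 800)
n, oc = o_minus_c(true_times)
print(oc[:5])   # small sinusoidal O - C values, like the wave in Figure 3
```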

There are, however, a few limitations to the calcium-line method. For example, the Cepheid observations were generally made at the peak of their light curves to give them the highest signal-to-noise ratio possible, but the best way to search for Cepheid companions is to look when the Cepheid is at the minimum of its light curve. This particular method, as noted earlier, is also only effective for companions hotter than the A3V spectral type, so it wouldn't work on less massive companions. Nevertheless, as the authors of today's paper have shown, looking at the relative depths of a Cepheid's calcium lines can be a very effective way of uncovering the secret companions of Cepheids.


United Launch Alliance Pulls Back Curtain on New Rocket by The Planetary Society

ULA revealed its new Vulcan rocket system today, an Atlas and Delta mashup the company says will increase power, lower costs and broaden mission capabilities.


PROCYON update: Asteroid 2000 DP107 target selected, ion engine stopped by The Planetary Society

PROCYON (PRoximate Object Close flYby with Optical Navigation) is a microsatellite that launched on December 3 as a secondary payload with Hayabusa 2. The mission has now selected their asteroid flyby target -- a binary asteroid named 2000 DP107 -- but is reporting a problem with their ion engines.


LightSail Launch Delayed until at least May 20 by The Planetary Society

The Planetary Society’s LightSail spacecraft will have to wait at least two more weeks before setting sail on its maiden voyage.


Planetary Defense Conference: Steps to Prevent Asteroid Impact by The Planetary Society

From Italy, Bruce Betts gives background and information at the start of the Planetary Defense Conference, which addresses the asteroid threat. Bruce summarizes steps to prevent asteroid impact.


April 13, 2015

The Party Manifestos 2015: Labour. by Feeling Listless

Politics Yes indeed. In an attempt to be relevant I thought I'd test each of the general election manifestos in the five key areas I'm interested in and which clearly won't get the same coverage as some other things even though they're arguably just as important. They're also interrelated to some degree. Today, Labour published their initial coalition negotiation document [joke (c) several dozen people on twitter]. Let's see how convincing they are.

The BBC

Our system of public service broadcasting is one of Britain’s great strengths. The BBC makes a vital contribution to the richness of our cultural life, and we will ensure that it continues to do so while delivering value for money. We will also commit to keeping Channel 4 in public ownership, so it continues to produce vital public content.
Bit too fucking ambiguous I'm afraid. Doesn't say "will protect the license fee" which is what I would have been looking for.  Plus the opening chunk of that section talks about media plurality and "no one media owner should be able to exert undue influence on public opinion and policy makers" which could be seen as having the potential to diminish the BBC through the back door.  Too, too vague.

Global Emissions
We will put climate change at the heart of our foreign policy. As the terrible impact of the floods in Britain showed last year, climate change is now an issue of national, as well as global security. From record droughts in California, to devastating typhoons in the Philippines, the world is already seeing the effects we once thought only future generations would experience.

The Intergovernmental Panel on Climate Change has made clear that if the world is going to hold warming below two degrees (the internationally agreed goal), global emissions need to peak in around 2020, and then decline rapidly to reach net zero emissions by the second half of this century. The weaker the action now, the more rapid and costly the reductions will need to be later.

The effects of climate change hit the poor, the hardest. If we do not tackle climate change, millions of people will fall into poverty. We will expand the role of the Department of International Development to mitigate the risks of a changing climate, and support sustainable livelihoods for the world’s poorest people.

We want an ambitious agreement on climate change at the UNFCCC conference in Paris, in December. We will make the case for ambitious emissions targets for all countries, strengthened every five years on the basis of a scientific assessment of the progress towards the below two degree goal. And we will push for a goal of net zero global emissions in the second half of this century, for transparent and universal rules for measuring, verifying and reporting emissions, and for an equitable deal in which richer countries provide support to poorer nations in combatting climate change.
All of which is pretty strong language and laudable as it stands.  Except the rest of the manifesto is a clusterfuck for the environment. Examples:
Following the Davies Review, we will make a swift decision on expanding airport capacity in London and the South East, balancing the need for growth and the environmental impact.
Hedging are we? Not simply saying no?
"For onshore unconventional oil and gas, we will establish a robust environmental and regulatory regime before extraction can take place. And to safeguard the future of the offshore oil and gas industry, we will provide a long-term strategy for the industry, including more certainty on tax rates and making the most of the potential for carbon storage."
Err ok. Still a commitment to fossil fuels then. Also fracking isn't mentioned at all.

Libraries

Not mentioned specifically.

Film Industry

Not mentioned specifically.

Gender Equality
The next Labour Government will go further in reducing discrimination against women, requiring large companies to publish their gender pay gap and strengthening the law against maternity discrimination. Where there is evidence more progress is needed, we will enforce the relevant provisions within the Equality Act.
That's pretty good actually. If you can't win the moral argument, shame them into it. Good, good.

"Women" are mentioned fifteen times and there's a general sense of providing extra protection for women through laws against domestic violence, " the indefinite detention of people in the asylum and immigration system, ending detention for pregnant women and those who have been the victims of sexual abuse or trafficking" and increasing investment in care. It's all pretty impressive to be honest.
This commitment to universal human rights will be at the heart of our foreign policy across the world. We will continue to promote women’s rights. We will join with those campaigning to attain gender equality, the eradication of poverty and inclusive economic growth. We will appoint a Global Envoy for Religious Freedom, and establish a multi-faith advisory council on religious freedom within the Foreign and Commonwealth Office. And we will appoint an International LGBT Rights Envoy to promote respect for the human rights of LGBT people, and work towards the decriminalisation of homosexuality worldwide.
No problem with much of that either even if it's a bit short on actual detail. Since it's in there, let's ask what a Global Envoy for Religious Freedom would actually do, and whether their remit would include the freedom not to have a religion, since atheists and non-denominational spiritualists are being persecuted too.

You can read the whole manifesto here.

Still wouldn't vote for them though.


Talks Collection: Professor Alice Roberts. by Feeling Listless

Science Professor Alice Roberts doesn't need much of an introduction.  She's a clinical anatomist and Professor of Public Engagement in Science at the University of Birmingham and also a busy television presenter, debuting on "Extreme Archaeology" back in 2004, then moving through Time Team, Coast and Horizon as well as her masterpiece, Origins of Us.

Let's begin with a couple of videos in which Professor Roberts talks about why she's a scientist. Firstly, for the Science: [So What? So Everything] campaign:



Introducing a monthly column for The Observer:



For Longform, an impressionistic portrait piece:


'Where i went right': ALICE ROBERTS from Longform on Vimeo.

A BBC Knowledge showreel:


Showreel - Alice Roberts BBC Knowledge Interview from Phil Bowman on Vimeo.


When Professor Roberts became a professor at the University of Birmingham she gave the opening lecture of that year's Darwin Day celebrations, elaborating on ideas from her tv series Origins of Us:



Later that year she offered a talk at Bournemouth University about public engagement:



As part of her professorship at Birmingham she's also presented these short pieces about studying at the university, among other things:






"An exhibition at the British Museum features sculptures made up to 40,000 years ago. Professor Alice Roberts meets curator Jill Cook to discuss three artefacts in the collection; the Lion Man, a group of female figurines from Siberia, and the oldest known flute":




At the Times Cheltenham Science Festival, Alice demonstrating "a novel and beautiful way of demonstrating anatomy":




Here she is being mighty on Newsnight in relation to creationism being taught as part of science teaching:



And on The Daily Politics:




The Young People's Forum at the Think Tank Museum in Birmingham produced this short documentary to which Alice is a contributor:




In 2009, Alice presents a "mini documentary about the discovery and current use of Magnetic Resonance Imaging (MRI) for the Medical Research Council":



Also from 2009, here's a sample entry created for Famelab, a competition to find new communicators of science in the media. It's about the similarities between three skulls:




The Royal Institution asked a group of celebrities to talk about their favourite chemical element. Here's Alice on Calcium:




Here's an excerpt from Dinosaurs: The Facts and Fiction in which she visits Crystal Palace and discovers how bones are married together:




Finally, here's her episode of Robert Llewellyn's series Carpool:


April 12, 2015

We Need To Talk About Matt Murdoch. by Feeling Listless

TV Or do we? Even before Netflix uploaded Daredevil at 8am GMT, news sites were posting reviews of the pilot or the first five episodes released to journalists, and IGN has posted thorough interrogations of every episode of a quality which actually made me want to watch the whole thing again with a fresh eye despite having binged my way through all thirteen episodes on Friday, eventually going to bed at about 1am.  I won't yet.  I'm just near the end of the second series of The Good Wife and a bit worried about Kalinda.  Here are a few brief comments anyway.

It probably doesn't need to be said that you should have seen the whole thing before reading any of the below.  While it's not going to be as lengthy a post as the one about earlier exploits in the MARVEL universe, Daredevil is still the kind of work which must be seen without much in the way of foreknowledge even if, I suspect, you're a fan of the comic.  Part of the relish fans of the books will have is in seeing which parts of the story have been chosen to be included and what's been changed.  So yes, go back to the binge now.

The first thing to say, and I think I have about five things to say, is that as a work it isn't as earth-shattering as either of last year's films.  There are no MCU-changing events here in the way The Winter Soldier impacted on Agents of SHIELD, and it continues the message from Guardians of the Galaxy that MARVEL is the shit.  The shocks and twists are internal to the series, because we like the characters and are involved in this story, rather than being about the impact they might have on the wider universe.  Apart from one element which I'll talk about later.  I think it's going to be last.

If anything the announcement of its existence was the seismic moment: deciding to make a minimum of five series for Netflix which interrelate with each other and the wider universe.  Even before light hit sensor on the 4K cameras you could feel the swagger in MARVEL's step; Guardians hadn't even been released and we'd already experienced the moment when it seemed like the studio could do whatever it liked.  A cartoon series has recently been announced set in the universe and the reaction's been pretty much, well, yes of course.  I'm hoping for Squirrel Girl.  Finally.

The biggest surprise for me was how my expectations were and weren't met.  Having been watching The Good Wife for weeks and with the 2003 Affleckathon in my head, I'd expected a relatively generic courtroom show which just happens to also have its lead character fighting crime in his off hours, or indeed taking the law into his own hands when his abilities as a lawyer had failed him.  I'd also expected loads more superheroes and villains, certainly an appearance from Elektra, and perhaps even a jokey episode where he meets Jennifer Walters or some such.  Boston Legal with powers.

Instead, it's a single story spread across thirteen episodes, with the Kingpin as the single antagonist and aspirations towards The Wire and the Nolan Batmans.  Which is a different approach which also works.  This is the kind of mature, adult drama Torchwood always aspired to be within the Whoniverse and SHIELD can never be.  Daredevil isn't a courtroom show because the conceit is that he and his partner don't have any clients.  They're too poor, too green, too new to have any of that.  It's the antithesis of The Good Wife.

It's also tonally completely unlike most superhero series we've seen before.  In places the pacing is leisurely.  Some scenes last for minutes upon minutes, with lengthy speeches and theological, existential discussions brimming with import.  It's almost filmed theatre, or at least old Hollywood adaptations of theatre, and when there is crosscutting between parallel scenes it's generally only when the writer is making a point about something rather than through narrative necessity.  It's mesmerising.

Fight scenes matter.  They hurt.  When Daredevil is creamed, as he is on multiple occasions, he's out of action until he's able to fight again.  Some episodes go by in which the only fight scenes happen as part of some intricate flashback structure and as part of the back story rather than for the sake of it.  Indeed pretty much every fight shown has some narrative or character importance.  The incidental, reputational pieces of derring-do happen off screen, as when Claire Temple (Rosario Dawson) talks about treating the aftermaths of his fights.

And at least the series tries to subvert gender politics.  On a couple of occasions there is some straight up damselling of female characters, but on neither occasion do we find female characters who simply sit around waiting to be rescued.  But again it's also important to note how, when they do take action, as with everyone else, it has consequences.  When the shockingly shocking, amazing thing happens at the end of episode eleven, it isn't shrugged off and we see its emotional toll.  Not the first time you've fired a gun?  Wow.

But the show is still connected with the MARVEL universe.  The US version of Den of Geek has an amazingly thorough few pages noting what they call "easter eggs" from across all the episodes, mostly from the comics but also in relation to the rest of the MCU.  The emergence of The Absorbing Man from SHIELD as Murdock's father's final boxing opponent is well publicised, but they notice that the orphanage that Murdock was brought up in as a child is the same establishment which housed Skye from SHIELD at what has to be roughly the same time.  They may well have met.

I've already seen grumblings online that Dawson's character is underused.  But of course, as with loads of elements of the series, we have to look at it in the context of the other upcoming series, and in the comics Temple is the ex-wife of Luke Cage, who has his own series coming as part of this run, which suggests she'll return for that.  As ever with an MCU property you must always view everything both as a narrative incident in and of itself and also as part of the wider universe.  Given everything that's upcoming at the cinema, it's inconceivable that the Netflix series will be works purely in and of themselves either.

Which brings me to the last point, which is Wilson Fisk's cufflinks.  Within the series, they're used as symbolic connective tissue between Fisk and his father.  But the design of them, and the shape, and the frequency with which they appear in close-up on screen makes me wonder if they'll have some wider significance; if perhaps, and I'm probably overexcited but nevertheless, they'll be revealed to be one of the Infinity Stones, Netflix's contribution to the Infinity War.  There's a useful primer in this graphic and the "soul" gem would seem to fit the bill...


On the Great Filter, existential threats, and griefers by Charlie Stross

So IO9 ran a piece by George Dvorsky on ways we could wreck the solar system. And then Anders Sandberg responded in depth on the subject of existential risks, asking what conceivable threats have big enough spatial reach to threaten an interplanetary or star-faring civilization.

This, as you know, is basically catnip for a certain species of SF author. And while I've been trying to detox in recent years, the temptation to fall off the wagon is overwhelming.

The key issue here is the nature of the Great Filter, something we talk about when we discuss the Fermi Paradox.

The Fermi Paradox: loosely put, we live in a monstrously huge cosmos that is rather old. We only evolved relatively recently -- our planet is ~4.6 GYa old, in a galaxy containing stars up to 10 GYa old, in a universe around 13.7 GYa old. The Fermi Paradox asks: if life has evolved elsewhere, then where is it? We would expect someone to have come calling by now: a five billion year head start is certainly time enough to explore and/or colonize a galaxy only 100K light years across, even using sluggish chemical rockets.
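A quick sanity check on that claim (my arithmetic, not from the post):

```python
# Rough check: how long to cross the galaxy at chemical-rocket speeds?
GALAXY_DIAMETER_LY = 1.0e5   # ~100K light years, as above
speed_fraction_c = 1.0e-4    # ~30 km/s, generous for chemical rockets

crossing_time_years = GALAXY_DIAMETER_LY / speed_fraction_c
print(f"{crossing_time_years:.0e} years")   # ~1e9 years

# Even ignoring the exponential spread of a colonisation wavefront, a single
# billion-year crossing fits comfortably inside a five-billion-year head start.
```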

We don't see evidence of extraterrestrial life, so, as economist Robin Hanson pointed out, there must be some sort of cosmic filter function (The Great Filter) which stops life, if it develops, from leaving its star system of origin and going walkabout. Hanson described two possibilities for the filter. One is that it lies in our past (pGF): in this scenario, intelligent tool-using life is vanishingly rare because the pGF almost invariably exterminates planetary biospheres before they can develop it (one example: gamma ray bursts may repeatedly wipe out life). If this is the case, then we can expect not to find evidence of active biospheres on other planets. A few bacteria or archaea living below the Martian surface aren't a problem, but if our telescopes start showing us lots of earthlike planets with chlorophyll absorption lines in their reflected light spectrum (and oxygen-rich atmospheres), that would be bad news, because it would imply that the GF lies in our future (an fGF).

The implication of an fGF is that it doesn't specifically work against life, it works against interplanetary colonization. The fGF in this context might be an emergent property of space exploration, or it might be an external threat -- or some combination: something so trivial that it happens almost by default when the technology for interstellar travel emerges, and shuts it down for everyone thereafter, much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Here are some example scenarios:

Simplistic warfare: As Larry Niven pointed out, any space drive that obeys the law of conservation of energy is a weapon of power proportional to its efficiency as a propulsion system. Today's boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons: if you can boost a payload up to relativistic speed, well, the kinetic energy of a 1 kg projectile traveling at just under 90% of c (τ of 0.5) is on the order of 20 megatons. Slowing down doesn't help much: even at 1% of c that 1 kilogram bullet packs the energy of a kiloton-range nuke. War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight is a horrible prospect.
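The arithmetic is easy to check with the standard relativistic kinetic energy formula (my sketch, not Charlie's):

```python
import math

C = 2.998e8          # speed of light, m/s
MEGATON_J = 4.184e15 # joules per megaton of TNT

def kinetic_energy_megatons(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in megatons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2 / MEGATON_J

# 1 kg at tau = 0.5, i.e. gamma = 2, v ~ 0.866c ("just under 90% of c"):
print(kinetic_energy_megatons(1.0, math.sqrt(3) / 2))   # ~21 megatons

# 1 kg at 1% of c:
print(kinetic_energy_megatons(1.0, 0.01) * 1000)        # ~1 kiloton
```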

Irreducible complexity: I take issue with one of Anders' assumptions, which is that a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. I've rabbited on about this in previous years: briefly, I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions; lest we forget, our current high-tech infrastructure is the climax product of on the order of 1-2 billion developed world citizens, and even if we reduce that by an order of magnitude (because who really needs telephone sanitizer salesmen, per Douglas Adams?) we're still going to need a huge population to raise, train, look after, feed, educate, and house the various specialists. Worse: we don't have any real idea how many commensal microbial species we depend on living in our own guts to help digest our food and prime our immune systems, never mind how many organisms a self-sustaining human-supporting biosphere needs (not just sheep to eat, but grass for the sheep to graze on, fungi to break down the sheep droppings, gut bacteria in the sheep to break down the lignin and cellulose, and so on).

I don't rule out the possibility of building robust self-sufficient off-world habitats. The problem I see is that it's vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica -- and our economic cost/benefit framework wouldn't show any obvious return on investment for self-sufficiency.

So our future-GF need not be a solar-system-wide disaster: it might simply be one that takes out our home world before the rest of the solar system is able to survive without it. For example, if the resource extraction and energy demands of establishing self-sufficient off-world habitats exceed some critical threshold that topples Earth's biosphere into a runaway Greenhouse effect or pancakes some low-level but essential chunk of the biosphere (a The Death of Grass scenario) that might explain the silence.

Griefers: suppose some first-mover in the interstellar travel stakes decides to take out the opposition before they become a threat. What is the cheapest, most cost-effective way to do this?

Both the IO9 think-piece and Anders' response get somewhat speculative, so I'm going to be speculative as well. I'm going to take as axiomatic the impossibility of FTL travel and the difficulty of transplanting sapient species to other worlds (the latter because terraforming is a lot harder than many SF fans seem to believe, and us squishy meatsacks simply aren't constructed with interplanetary travel in mind). I'm also going to tap-dance around the question of a singularity, or hostile AI. But suppose we can make self-replicating robots that can build a variety of sub-assemblies from a canned library of schematics, building them out of asteroidal debris? It's a tall order with a lot of path dependencies along the way, but suppose we can do that, and among the assemblies they can build are photovoltaic cells, lasers, photodetectors, mirrors, structural trusses, and their own brains.

What we have is a Von Neumann probe -- a self-replicating spacecraft that can migrate slowly between star systems, repair bits of itself that break, and where resources permit, clone itself. Call this the mobile stage of the life-cycle. Now, when it arrives in a suitable star system, have it go into a different life-cycle stage: the sessile stage. Here it starts to spawn copies of itself, and they go to work building a Matrioshka Brain. However, contra the usual purpose of a Matrioshka Brain (which is to turn an entire star system's mass into computronium plus energy supply, the better to think with) the purpose of this Matrioshka Brain is rather less brainy: its free-flying nodes act as a very long baseline interferometer, mapping nearby star systems for planets, and scanning each exoplanet for signs of life.

Then, once it detects a promising candidate -- within a couple of hundred light years, oxygen atmosphere, signs of complex molecules, begins shouting at radio wavelengths then falls suspiciously quiet -- it says "hello" with a Nicoll-Dyson Beam.

(It's not expecting a reply: to echo Auric Goldfinger: "no Mr Bond, I expect you to die.")

A Dyson sphere or Matrioshka Brain collects most or all of the radiated energy of a star using photovoltaic collectors on the free-flying elements of the Dyson swarm. Assuming they're equipped with lasers for direct line-of-sight communication with one another isn't much of a reach. Building bigger lasers, able to re-radiate all the usable power they're taking in, isn't much more of one. A Nicoll-Dyson beam is what you get when the entire emitted energy of a star is used to pump a myriad of high powered lasers, all pointing in the same direction. You could use it to boost a light sail with a large payload up to a very significant fraction of light-speed in a short time ... and you could use it to vapourize an Earth-mass planet in under an hour, at a range of hundreds of light years.

Here's the point: all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare).

We have plenty of griefers who like destroying things, even things they've never seen and can't see the point of. I think the N-D laser/Von Neumann Probe option is a worryingly plausible solution to the identity of a near-future Great Filter: it only has to happen once, and it fucks everybody.

What other fGF scenarios can you think of that don't require magical technology or unknown physics and that could effectively sterilize a galaxy, starting from a one-time trigger event?


NASA's Mission to Europa May Get More Interesting Still by The Planetary Society

NASA officials have asked their European counterparts if they would like to propose contributing a small probe to NASA's Europa mission planned for the mid-2020s.


April 11, 2015

Who created Monopoly? by Feeling Listless

Games One of my most vivid television memories is from the early eighties and the educational programme Eureka, which was a series about inventions. I remember an episode about the creation of the board game Monopoly, of a man becoming very excited after noticing the patterning on his tablecloth and turning it into the familiar game board with properties and such.

The BBC Genome now tells me this was broadcast on BBC Two England on 25 November 1982 at 6.10pm and was written by Jeremy Beadle and Clive Doig and was the story of how "Charles Darrow got his inspiration for the game Monopoly from Atlantic City in America."

Except their version of events was wrong or at least glossed over somewhat the contribution of an earlier inventor.

As this book extract from The Guardian explains, "in 1903, a leftwing feminist called Lizzy Magie patented the board game that we now know as Monopoly – but she never gets the credit":

"To Elizabeth Magie, known to her friends as Lizzie, the problems of the new century were so vast, the income inequalities so massive and the monopolists so mighty that it seemed impossible that an unknown woman working as a stenographer stood a chance at easing society’s ills with something as trivial as a board game. But she had to try.

"Night after night, after her work at her office was done, Lizzie sat in her home, drawing and redrawing, thinking and rethinking. It was the early 1900s, and she wanted her board game to reflect her progressive political views – that was the whole point of it."
Mary Pilon, the author of the book, has also written this opinion piece on the debacle.  Little by little, the history we thought we knew based on patriarchal propaganda is slowly being rewritten.


Curiosity update, sols 896-949: Telegraph Peak, Garden City, and concern about the drill by The Planetary Society

Since I last wrote about Curiosity drilling at Pink Cliffs, the rover has visited and studied two major sites, drilling at one of them. It has also suffered a short in the drill percussion mechanism that presents serious enough risk to warrant a moratorium on drill use until engineers develop a plan to continue to operate it safely.


To Recover First Stage, Just Read the Instructions by The Planetary Society

SpaceX is gearing up for a second attempt to land the spent first stage of a Falcon 9 rocket on a ship in the Atlantic Ocean.


April 10, 2015

Knock on Torchwood. by Feeling Listless

TV There were rumours of a radio return but it looks like we'll never see a resolution to the end of "Miracle" Day. The Backlot asks the question:

I’d be remiss if I didn’t ask about Torchwood. Every few months there seems to be another story about how we may get more Jack Harkness. What’s happening?

RTD: I know! Bless that show, but I’m afraid not. John Barrowman is busy on Arrow. I can’t believe he’s actually joined a show that’s got his name inserted in the title. [laughs] Just call it Barrowman and be done with it! But I’m afraid no plans at the moment. But anything can happen because it’s a funny old world. But I know what I’m doing for the next two or three years and it doesn’t involve Torchwood. But who knows, the BBC is going in a different direction so who knows. Keep your fingers crossed.
Well, ish. Blames Barrowman for being busy (not that this ever stopped him) and doesn't rule out Torchwood without RTD's involvement exactly, but yes, that looks like it could be it. For now. Possibly.


The 0% Take Rate Marketplace Valuation Conundrum by Albert Wenger

I wrote a post not that long ago about how take rate (or rake) is one vector for disrupting incumbent marketplaces. Recently I have seen a number of startups that have taken this to its logical extreme: charge nothing at all. Several of these companies are growing very rapidly. But there is an important catch here: how should they be valued?

Several of the startups in question are raising rounds based on a GMV metric, i.e. the size of their marketplace. Now we have a lot of experience with marketplaces and therefore have a fair number of historical valuations for comparison. Those give us some sense of what a reasonable multiple on GMV would look like *based* on the take rate, for rates as low as 20 bps (basis points), i.e. 0.2% take rate.

But 0% take rate? That gets a lot harder. The argument that these startups make is something like “we are optimizing for growth now and will charge later and are targeting a take rate of x%” – as you would expect, though, valuations are quite sensitive to what x actually winds up being. There is a big difference between 20bps and 5% (25x to be precise).
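To make that sensitivity concrete, a toy example (the GMV figure and the revenue multiple are made up for illustration, not a real valuation model):

```python
# Illustrative only: how sensitive a GMV-based valuation is to the
# eventual take rate.
gmv = 1_000_000_000          # $1B of gross marketplace volume (made up)
revenue_multiple = 10        # assume the market pays ~10x net revenue (made up)

for take_rate in (0.002, 0.01, 0.05):      # 20 bps, 1%, 5%
    net_revenue = gmv * take_rate
    implied_value = net_revenue * revenue_multiple
    print(f"take rate {take_rate:.1%}: implied value ${implied_value:,.0f}")

# 20 bps -> $20M, 5% -> $500M: the same 25x spread mentioned above.
```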

So if this is your strategy then I have two recommendations. First, don’t wait too long to start charging – it will take you time to figure out how to get it right. Second, base your valuation on a lower x than you think you can reasonably achieve. Otherwise you run a high risk of finding yourself in the post-money trap.


Protogalaxy Collisions Birthing Supermassive Black Holes by Astrobites

Title: Direct collapse black hole formation via high-velocity collisions of protogalaxies
Authors: Kohei Inayoshi, Eli Visbal, Kazumi Kashiyama
First Author’s institution: Columbia University
Status: Accepted to The Astrophysical Journal

Quasars are some of the brightest objects in the sky, powered by accretion of material onto black holes in the centers of galaxies. Quasars at high redshifts (z >~ 6) point to the existence of supermassive black holes (SMBHs) with masses >~ a few x 10^9 M☉ within the first billion years after the Big Bang (z ~ 6 corresponds to ~1 Gyr after the Big Bang). How were such massive black holes able to form in such a short time? Our Solar System, for instance, only formed ~9 Gyr after the Big Bang. This question continues to perplex us to this day.

Cooking up Supermassive Black Holes: Three Different Recipes

Astronomers, creative as they are, came up with three mechanisms by which this might happen. The first and simplest scenario is gas accretion onto high mass (10-100 M☉) black holes left behind by the first generation of stars. There is an upper limit to this accretion rate which is set by the balance between the gravitational force of the infalling materials and the repulsive radiative force from the black hole accretion disks. This limit is known as the Eddington limit. In order to grow 10^9 M☉ black holes within the first Gyr after the Big Bang, baby black holes have to accrete at the Eddington limit since birth for their entire lifetime, which is unlikely due to some form of radiative feedback from the accreting black holes.

A second way to form SMBH is via major mergers. This is not a promising route as mergers could expel gas during the merging process, thus halting rather than encouraging accretion. Gases are ejected due to the large kick velocity from the merger.

The third method to form a SMBH is by first forming a supermassive star (SMS) with mass >~ 10^5 M☉ very early on in the history of the Universe. The supermassive star will form a SMBH when it collapses and dies. A SMBH that forms from the direct collapse of a SMS is also known as a direct-collapse black hole or DCBH (because we astronomers love to name things and are obsessed with acronyms). This is favorable as the larger mass of the initial black hole seed reduces the accretion time needed to reach ~10^9 M☉. The third formation scenario will be the focus of today’s astrobite. The topic of supermassive stars as progenitors of SMBHs has been explored in the past in various astrobites, such as this one.
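To see why a heavier seed helps, here's a rough Eddington-growth estimate; the ~45 Myr Salpeter e-folding time is a standard value for 10% radiative efficiency, not a number quoted in the paper:

```python
import math

# Eddington-limited accretion grows a black hole exponentially with an
# e-folding ("Salpeter") time of roughly 45 Myr (assuming 10% radiative efficiency).
SALPETER_TIME_MYR = 45.0

def eddington_growth_time_myr(seed_mass, final_mass):
    """Time (Myr) to grow from seed_mass to final_mass (solar masses)
    while accreting continuously at the Eddington limit."""
    return SALPETER_TIME_MYR * math.log(final_mass / seed_mass)

print(eddington_growth_time_myr(100, 1e9))   # ~725 Myr for a 100 Msun stellar-remnant seed
print(eddington_growth_time_myr(1e5, 1e9))   # ~415 Myr for a direct-collapse seed
```

A 100 M☉ seed only just fits inside the first Gyr if nothing ever interrupts the accretion, which is exactly the objection above; a 10^5 M☉ direct-collapse seed has far more slack.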

Recipe of the Day: A Dash of Supermassive Stars and A Pinch of Colliding Protogalaxies

How do we form supermassive stars? Almost the same way we form normal-sized stars: through the collapse of gas clouds. In normal star formation, the gas clouds are composed primarily of molecular hydrogen, which helps keep the clouds very cool and consequently promotes fragmentation into smaller clumps. These small clumps are the birth places of the stars we see today.

To create a SMS, we do not want the cloud to fragment; instead, the gas cloud should collapse as one giant blob into one massive protostar. To do this, we need to prevent cooling in the gas cloud, which can be done with low-metallicity (as metals are major cooling agents) and no H2. The whole problem of SMS formation thus amounts to how well we can suppress H2 cooling in the parent gas cloud. One pathway to achieve this is by breaking up H2 through collision: H2 + H → 3H. This can happen at high enough density and temperature typical of collisions in gas clouds.

The authors of today’s paper collide two protogalaxies (gas clouds en route to forming galaxies) and their dark matter halos at high velocities as a way to form a SMS that also circumvents the problem of H2 cooling. High-velocity — we will define what “high-velocity” means soon — collision of protogalaxies will create a hot and dense enough region to destroy H2 via collisions. In order for this to happen, the collision has to happen within a certain velocity range: too low and the gas will not be shocked to the required density and temperature; too high and the gas will not be able to cool and collapse. Figure 1 shows the collision-velocity window, which depends on redshift. For a SMS to form at z >~ 10 (~400 Myr after the Big Bang), protogalaxies have to collide with a relative velocity of >~ 200 km/s.


FIG. 1 – The range of collision (or relative) velocity required for supermassive star (SMS) formation. The solid curve is the lower velocity limit while the dashed curve is the upper velocity limit. The formation of a SMS happens at high redshifts of ≥ 10. Since the collision-velocity window widens with redshift, it is easier to form a SMS at higher redshifts.

How often might protogalaxy collisions (and so formation of supermassive stars) occur? To an order of magnitude, the authors estimated it to be ~10^-9 collisions per Mpc^3 by z ~ 10. Although extremely rare, the rate of occurrence of such collisions is still high enough to potentially explain the abundance of high-redshift quasars. Figure 2 shows the cumulative rate of occurrence of such collisions as a function of redshift.


FIG. 2 – The cumulative number density of protogalaxy collisions, n_DCBH, as a function of redshift. At z ∼ 10, the number density of collisions is ∼10^-9 Mpc^-3.

Are there any observable signatures from protogalaxy collisions? Possibly. Since the gas in the colliding system is mostly neutral, there will be cooling radiation from regions that undergo such collisions. This radiation could be detected by the James Webb Space Telescope (JWST).

Imperfect Recipe

So is the mystery of SMBHs solved? Can we now declare victory, pack up, and go home? Not quite... The authors adopted several assumptions in their paper, some of which are more valid than others. For instance, the thermodynamics of the colliding gas is treated one-dimensionally. However, protogalaxy collisions are actually three-dimensional, so we need 3D simulations to fully capture this formation scenario. Additionally, there are also uncertainties involved in estimating the frequency of protogalaxy collisions.

The formation of supermassive stars via collisions of protogalaxies is intriguing. To first order, it is able to produce roughly the same number density of SMBHs as suggested by high-redshift quasars. However, there are still rough edges that need to be smoothed out before we can say for sure that we’ve nailed down the solution to the problem of SMBH formation.

 


Field Report from Mars: Sol 3978 - April 3, 2015 by The Planetary Society

Larry Crumpler gives an update on Opportunity's exploration of Mars as it approaches the entrance to Marathon Valley.


April 09, 2015

What's in a Wardley (Value Chain) Map? by Simon Wardley

It doesn't matter whether it's a map of an organisation or line of business or policy or system or industry. The same elements exist (see figure 1)

The elements of a map




1). You have the User needs.

2). You have many chains of needs. 

3). Those chains of needs consist of components whether activities, practices, data or knowledge.

4). The entire "Value Chain" (i.e. all the chains of needs and their components meeting the user need) provides positional information on the landscape i.e. what relates to what. It is called a value chain because the assumption is that value is created by meeting the needs of others.

5). Every component is evolving where supply and demand competition exists. So the map is under constant evolutionary flow and it's not static. As components evolve, their characteristics change from uncharted to industrialised.

6). By mapping against evolution you can therefore see movement and identify how things will change. The map enables you to describe an organisation or line of business or system (e.g. value chain) against change (evolution) and therefore provides positional information and movement. This is critical for any form of situational awareness which in turn is useful for organisational learning (i.e. what methods, patterns, technique work in a given context).

7). Within the map itself you have various flows e.g. risk & finance. If you wish you can knock yourself out with fault tree analysis, value stream mapping and all other sorts of flow. They all have uses. 

8). The entire map occurs in a landscape against which competitors, market changes, other maps of other systems and strategic plays can be shown. This can be used for numerous techniques, from removing bias & silos to gameplay. Maps are rather easy communication tools, usable across functions in the organisation.

The defining characteristic of this form of mapping is position and movement. It's all about improving situational awareness.
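Purely as an illustration of that positional information (this isn't any official tooling or Simon's own format, and all the component names and numbers below are made up), a map's raw data can be captured as components with a value-chain position, an evolution stage, and the chains of needs between them:

```python
# A minimal sketch of the data behind a Wardley map: each component has a
# position on the value chain (how close it sits to the visible user need)
# and an evolution stage. Illustrative values only.
EVOLUTION_STAGES = ["genesis", "custom built", "product/rental", "commodity/utility"]

components = {
    # name: (value-chain position 0..1 where 1 = visible user need,
    #        evolution 0..1 where 1 = fully industrialised)
    "online photo storage (user need)": (1.00, 0.65),
    "web site":                         (0.80, 0.70),
    "platform":                         (0.50, 0.60),
    "compute":                          (0.30, 0.95),
    "power":                            (0.10, 1.00),
}

dependencies = [  # chains of needs: (component, what it needs)
    ("online photo storage (user need)", "web site"),
    ("web site", "platform"),
    ("platform", "compute"),
    ("compute", "power"),
]

for name, (position, evolution) in components.items():
    stage = EVOLUTION_STAGES[min(int(evolution * 4), 3)]
    print(f"{name:<35} value-chain y={position:.2f}  evolution x={evolution:.2f} ({stage})")
```

Plotting position (y) against evolution (x) gives the familiar map; re-running it over time is what makes movement, and therefore situational awareness, visible.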


Soup Safari #20: Chicken at Maggie Mays Cafe Bar. by Feeling Listless







Dinner. £2.75. Maggie Mays Cafe Bar, 90 Bold Street, Liverpool L1 4HY. Phone:0151 709 7600. Website.


Meet Aptivate at ICTD Singapore - May 14-18th by Aptivate

The bi-annual ICTD conference comes to Singapore this May, exploring issues relating to the use of information, communication technologies...


A participatory discovery process in India by Aptivate

Martin Belcher, Marko Samastur and I travelled to India in February to work with Pradan who work in the poorest districts of India, transforming lives...


Was Our Moon’s Formation Likely or Lucky? by Astrobites

Paper 1:

Paper 2: 

Cover art by Dana Berry, Source: Robin Canup, SWRI

 

Earth’s moon is peculiar in our Solar System, and in many ways it is an amazing cosmic coincidence.  Our moon is the largest known moon compared to the size of its host planet (sorry Pluto, you and Charon don’t count). Though she is creeping away from Earth a few centimeters every year, humans came about on this planet at the prime time to catch our moon at the exact distance to perfectly obscure the Sun during a solar eclipse.  And some astrobiologists posit that humans may not be here at all if it weren’t for the tide pools our oversized moon created on Earth that helped give rise to the first forms of life a few billion years ago. But how about what our moon is made of?  Is its composition a cosmic coincidence or a likely result from the conditions of our early Solar System?  Two recent studies investigated this question, and surprisingly arrived at completely contradicting results.

The Impact of Earth

The canonical theory of how our moon came to be is the giant impact theory: about 4.5 billion years ago during the late stages of planet formation in our Solar System, a Mars-sized body (which is referred to as Theia) delivered a glancing blow to the young Earth.  The resulting debris from this collision then coalesced to form what is now our Moon.  This theory does a good job of explaining why the spin of the Earth and Moon have similar orientations and why the Moon is depleted of iron, and it has been possible to properly recreate the Earth-Moon system using smoothed-particle hydrodynamic simulations.

Originally, this theory was also supported by the compositional similarities between the Earth and Moon.  From analysis of lunar meteorites and lunar samples brought back by the Apollo missions, scientists have found that there are a number of stable isotopes that the Earth and Moon have nearly identical quantities of, whereas meteorites from other Solar System bodies such as Mars and Vesta have drastically different proportions of these isotopes.  This was thought to be an indication that the Earth and Moon had a common origin – the Moon formed from a plethora of material that was ejected from the early Earth when it was struck by Theia.  Studies simulating the Moon’s formation have found that this is not the case; after the impact the Moon forms primarily from the impactor Theia’s material rather than from the proto-Earth’s material.  This realization puts some holes in the giant impact theory, since it requires that the protoplanet Theia formed in a part of the planetary disk compositionally identical to Earth.  Though some theories have been cooked up to explain the compositional similarity of the Earth and Moon (such as a fast-spinning Earth being struck by a large impactor, or an extremely high-velocity impactor), all so far require fine-tuning of initial conditions, which makes them unlikely scenarios.

Conflicting Conclusions

The two papers of today’s post investigated the likelihood that a planet’s last major impactor is isotopically similar to the planet it hits, as is apparently the case for Theia and Earth. Both studies used the Mercury integration package (a common software used to study Solar System dynamics) to model the planet formation and late accretion processes of the early Solar System, and used an oxygen isotope called oxygen-17 as a gauge for how similar planetesimals are in the simulation. Oxygen-17 is one of the best measured, and strikingly similar, stable isotopes evidencing Earth-Moon similarity, with a difference of less than 0.016% in terrestrial and lunar rocks.  However, the two studies came to contradictory conclusions: Kaib & Cowan (paper 1) found it very unlikely that Earth and Theia would form with similar compositions, and Mastrobuono-Battisti et al. (paper 2) concluded the opposite.  Why did they differ?  Here is a list of some of the differences between the two studies that may have led to conflicting conclusions:

1) Role of big planets?  The behemoths of our Solar System, Jupiter and Saturn, play an important role in the formation of the inner rocky planets. Since orbital characteristics such as the eccentricity of these big planets may have changed since this time in the Solar System’s history, one cannot necessarily assume the orbital characteristics we see for these planets today.  Both studies used a variety of configurations of Jupiter and Saturn that affected the accretion disk and feeding zones (the regions in the planetary disk from where developing planets grab material). Figure 1 shows how differing initial conditions of Jupiter, Saturn, and the accretion disk altered the types of planets that formed in the simulations of paper 1.

 


Figure 1. The final mass and semi-major axis of planets created in 3 different ensembles: circular initial orbits of Jupiter and Saturn (left), eccentric initial orbits of Jupiter and Saturn (center), and circular initial orbits of Jupiter and Saturn with the planetary embryos initially confined to an annulus between 0.7 and 1.0 AU (right). Each ensemble was run 50 times, and the black dots represent the planets that were created throughout all 50 runs. The green, blue, and red shaded regions show Venus analogs, Earth analogs, and Mars analogs, respectively, that are selected based on their masses and orbital distances. One can see how drastically eccentric orbits of the giant planets affect the formation of inner planets in the center panel, as the planetesimal disk becomes truncated via secular resonances. Figures 1-3 in Kaib & Cowan.

 

2) Where is the oxygen?  The distribution of oxygen-17 in our Solar System’s planet-forming disk is unknown, and could potentially change the outcome of the analysis.  Both studies used a linear distribution of oxygen-17 (the amount of oxygen-17 linearly changes with distance from the Sun in the initial disk), and paper 1 also investigated other possibilities: a bimodal distribution, a step function distribution, and a random distribution, though they found that these distributions did not affect their conclusions.

3) How much of the Earth went into the Moon?  Though most simulations have the Moon being primarily composed of material from the impactor Theia, the percentage of the proto-Earth that gets mixed in is up for debate.  Paper 2 was less stringent with its criteria for the impactor’s composition, because it allowed larger percentages of the initial planet to be mixed in with the impactor to form its moon. These percentages are consistent with Moon-forming simulations (there's a rough sketch of this mixing bookkeeping after this list).


Figure 2. The region of contribution from the planetesimal disk for Venus (green), Earth (blue), Mars (red), and Theia (hashed) analogs in a cumulative distribution function (CDF). The solid line is the median CDF of each class of analogs, and the shaded region shows the 1-sigma uncertainty. Panels A, B, and C show the results from the circular Jupiter/Saturn, eccentric Jupiter/Saturn, and annulus simulations (see caption of figure 1 for a more detailed description of these). Figure 9 in Kaib & Cowan.

4) When is a planet the Earth?  Paper 1 also considered only planets that were deemed “Earth analogs” and impactors that were “Theia-analogs” in their conclusions.  Only planets that had a mass and orbital distance similar to Earth and impactors of Earth that were consistent with the predicted mass of Theia could lead to an Earth-Moon system. The compositional similarity of these bodies was then analyzed. Figure 2 shows the distribution of Earth and Theia analogs in the simulations of paper 1, as well as the distribution of planets deemed Venus- and Mars- analogs. Paper 2 considered all collisions between a planet and its last impactor in their conclusions, but also investigated the likelihood of collisions between Earth- and Theia-like bodies.
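As promised above, here's a very rough sketch of the isotope bookkeeping behind points 2 and 3 (this is not the actual Mercury simulations; the gradient, feeding zones, and 20% mixing fraction are all made-up illustrative values):

```python
import numpy as np

# Planetesimals get a linear oxygen-17 gradient with distance from the Sun,
# a planet's composition is the mass-weighted mean of everything it accretes,
# and the question is how close the last impactor's composition lands to the
# planet's once some proto-Earth material is mixed into the moon.
rng = np.random.default_rng(0)

def delta_17o(a_au):
    """Assumed linear isotope gradient with heliocentric distance (arbitrary units)."""
    return 1.0 - 0.5 * a_au

def planet_composition(feeding_zone_au, n_bodies=200):
    """Mass-weighted mean isotope value of planetesimals drawn from a feeding zone."""
    a = rng.uniform(*feeding_zone_au, n_bodies)
    mass = rng.uniform(0.5, 1.5, n_bodies)
    return np.average(delta_17o(a), weights=mass)

earth_analog = planet_composition((0.7, 1.3))   # wide, partly shared feeding zones
theia_analog = planet_composition((0.8, 1.4))

mix = 0.2   # fraction of proto-Earth material mixed into the moon (point 3)
moon = mix * earth_analog + (1 - mix) * theia_analog
print(abs(moon - earth_analog))   # small differences mimic the observed Earth-Moon match
```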

So what was determined by the two studies? Paper 1 found that less than ~5% of Earth-analogs were last struck by an impactor that was compositionally as similar to it as the Earth is to the Moon, and therefore the formation of a moon like Earth’s is a statistical outlier.  Paper 2 determined that ~50% of all planets were last struck by an impactor compositionally consistent with what is seen in the Moon, assuming about a fifth of the impacted planet’s material was used in the synthesis of the moon. Both studies agreed that the feeding zones of terrestrial planets aren’t exclusive but rather shared among the inner planets, yet they came to shockingly different conclusions from similar analyses. The drastic differences most likely came from paper 1 only considering Earth-Theia analogs in their conclusions, and paper 2 allowing the Moon to contain a significant fraction of proto-Earth material.  It seems that the origin of our nearest celestial neighbor may still be shrouded in mystery, and further analysis will need to be done to determine if our moon’s formation was a likely outcome or a cosmic coincidence.


A moon with atmosphere by The Planetary Society

What is the solar system moon with the densest atmosphere? Most space fans know that the answer is Titan. A few of you might know that Triton's is the next densest. But what's the third? Fourth? Do any other moons even have atmospheres? In fact, they do; and one such atmosphere has just been discovered.


Discovery Lives by The Planetary Society

Last month teams of scientists from around the United States submitted proposals for the thirteenth mission in NASA’s Discovery program. Jason Callahan discusses this latest round of proposals.


April 08, 2015

Phantom Photo Storage in iProducts. by Feeling Listless

Technology Recently I've been having dire warnings of my iPad's storage being overextended, both because of Twitter's insistence on keeping a cache of all the tweets, including images, that it looks at (only fixable by deleting then reinstalling the app) and because of photos. The Usage list in Settings suggested my photos section was a bloated, bloated place.

Once I'd convinced the Windows 7 machine to find the relevant folders and copy the images I wanted to keep over, there was then the problem of deleting what was still there. But even after having visited the Photos app and deleting everything, the app itself indicated that it was still holding a mass of data of some sort I couldn't see.

To Google, then, and one of the most bonkers technological quirks since Y2K, and a fix found on Apple's discussion boards from a user called scabthepoet, which I'm going to reproduce below so that it's in more than one place.  I can't find more identity details for them, so if it's you and you stumble here and you'd like me to take this down or provide better credit, let me know, but know you're an utter, utter genius:

  • Go to Settings
  • Date & Time
  • Untoggle "Set Automatically"
  • Manually change the date back. For example, if today is March 15, 2015, choose August 1, 2014. (You can change it back once we're done)
  • Close out of that
  • Open "Photos"
  • Select "Albums"
  • If, like me, you had already cleared out everything from the Camera Roll and "Recently Deleted" folder, you'll smile to see that your "Recently Deleted" folder now has thousands of images back. Those are your phantom photos
  • Open it, "Select" and start deleting
  • Now, go back into Settings - General - Usage - Storage - Manage Storage - and you'll notice your Photo & Camera is empty if you deleted everything
How messed up is that? Sure enough, following these instructions I did indeed find two hundred-odd images I thought I'd deleted hiding away, which I was then able to clear. Turns out 2GB of my problems had been down to such files, which is quite a lot on an iPad 2 with just 16GB of storage.

Just as an aside, the trick seems to be to take the date back to way before you bought your product - I went to 2012.  Later than that doesn't seem to work as well.  Let me know how you get on.


My Favourite Film of 2001. by Feeling Listless



Film Some films I simply remember by the date and place that I first saw them, because of something else which happened in proximity. My first viewing of Moulin Rouge! was on the 10th September 2001 in a double bill with A Knight's Tale. It was in screen six at the Odeon on London Road and I wrote about it on this blog that evening (and also about A Knight's Tale). I also remember the two films I watched the following day: Bullitt in the morning, then the first hour of Ever After, the rest of which I didn't see for six months, having turned it off in the middle for a toilet break, a break which didn't happen for another ten minutes anyway because I happened to have the news channel tuned in on the VCR.

One of the repeated themes of this series is whether it's possible to uncouple your memories of seeing a film for the first time and the circumstances from your later ongoing appreciation of a film itself.  This depends.  Having waited for months I finally saw Luc Besson's Lucy during an otherwise quite boring day and I know that in the future I won't remember anything about that day but the film itself or even which day it is (for the future version of me that was the 18th March 2015 so I'll remember that now).  This blog certainly helps the memory too, though it's worth noting I'd remember that I saw these films on these days without the younger version of me's record of the event.

In the case of Moulin Rouge!, no I haven't. I can't. The events of the following day are too significant, not just in and of themselves but also in everything which resulted from them, the effects of which are still being felt depending on your socio-historical perspective. Each time I sit down to watch this mostly melancholy, often quite jolly musical, there is always a strange moment when I remember what it was like on September 10th, which in terms of events and context and the actual process of living as me wasn't that different (I wasn't directly affected), but also the feeling of being pre-9/11. Which is silly. Or might be silly. I don't know.

But why Moulin Rouge! ahead of all the other films released that year?  Perhaps because it's robust enough to move beyond that, for me to become lost in Luhrmann's day-glo post-modern interpretation of Paris, the way it, as is so often the case with the films in this list, and it is a list, presents images and sound and performance in ways which hadn't been seen before, reinventing what cinema is capable of.  Contemporary critics couldn't interpret the kinetics of the editing, with Peter Bradshaw in The Guardian typical of the cry for everything to slow down so that the viewer can see the scenery, criticising "the great undifferentiated roar of colour and light and noise" as though they're bad things.

The great undifferentiated roar of colour and light and noise is the point.  Moulin Rouge!, along with the other "red curtain" films (Strictly Ballroom and Romeo + Juliet before it) and his concept album Something for Everybody (a spin-off from his A Midsummer Night's Dream stage show), is, as this interview from the same paper explains, about appropriating certain elements of Hindi filmmaking within a Western format, throwing a mess of shapes at the screen whilst at the same time engaging with deeper, heartfelt themes.  If I'm being honest, I've never really seen a satisfactory explanation of "red curtain".  It's more of an attitude than a definable cinematic language.

It's an odd coincidence that I saw it in conjunction with A Knight's Tale, both of which utilise the trick of shifting contemporary music into an anachronistic setting as a substitute for what would have been the popular music of the actual period.  They're both successful at it in different ways, but it's the whole thing of Moulin Rouge!, whereas it's just odd sections of A Knight's Tale.  In any case, the reason I love Moulin Rouge! is because it dares you to engage with such things, or not.  To go with it, or not.  Too many films are timid now, afraid to strike out and fail, and they ultimately fail anyway because of this.  I'll never know, but perhaps this would have been enough to keep its initial viewing in the memory anyway, without the subsequent tragic events.


We are hiring an Agile Project Manager! by Aptivate

Are you motivated by using technology to improve people’s lives? We are hiring an Agile Project Manager


My £80 DIY IKEA standing desk by Zarino

It’s an object of some fascination and no small part of jocularity for the people who walk past my office each day, and it’s even had its own visitors and admirers come, to stroke its legs and try it out for size.

I speak, of course, of my standing desk.

Zarino’s desk in the Sea Level Research shed, in the Baltic Creative

I’m currently based in a sort of co-working space—the Baltic Creative—just outside Liverpool city centre. The area’s making a bid to be Liverpool’s creative quarter, and it’s packed with game developers, design and marketing agencies, and a handful of tech startups.

I rent half of a shed with one such tech startup – Sea Level Research, run by two friends, Simon and Paul. There are about 10 similar sheds under one huge roof, plus two open-plan work areas, with shared kitchens and “outdoor” seating, and there’s a nice café out front. It’s so hipster, there are even free bikes for Baltic Creative residents to use for fast transport into town.

Baltic Creative sheds

Once you get into the shed, I’m right there, standing behind a custom-made standing desk.

Panorama of Zarino’s desk in the shed

Friends of mine have constructed standing desks out of coffee tables and shelves stacked on top of normal, low, tables. But I decided to raise the entire desktop up instead – a) because it’s prettier, and b) because it gives you way more desk space when you’re standing. I often need to lay out pieces of paper, or my iPad, or whatever, so a standing desk with lots of worktop is a must.

Close-up of the desk



If you’re interested in building your own Zarino-style standing desk, here’s what you’ll need:

Desk components:

  • IKEA GALANT desk – £49
  • 35mm diameter dowel – £20
  • IKEA EKBY LAIVA shelf – £1.90
  • IKEA CAPITA feet – £8
  • Screws – £1.90

The trick is to unscrew the Galant’s telescopic legs all the way – literally remove them. Then just slide the wooden dowel up there instead. One advantage is that the whole process is reversible, should you ever want to move the desk, or change the height. The other advantage is that it’s ch-ch-cheap!

The total cost of desk and monitor stand comes in at about £80, plus a few hours of measuring lengths and angles, sawing the wooden legs, and constructing. I am a total DIY n00b, so if I can manage it, anyone can.

IKEA seems to be phasing out the GALANT range, so I’m not sure how long this hack will be possible. But in the meantime, if you find your mind wandering (or your back aching) mid-afternoon, as you sit at your desk, drop the £80 and give this standing desk a try!


Super-bright Supernovae are Single-Degenerate? by Astrobites

Title: Single-Degenerate Type Ia Supernovae Are Preferentially Overluminous
Authors: Robert Fisher & Kevin Jumper
First Author’s institution: University of Massachusetts Dartmouth
Status: Accepted to The Astrophysical Journal

Type Ia supernovae (SNe) are often the archetype of an astronomical standardizable candle — something that has a known luminosity which we can use to measure its distance. Scientists famously used type Ia SNe to discover that the expansion of our universe is accelerating, work that won the Nobel Prize in 2011. However, one of astronomy’s dirtiest secrets is that we don’t know exactly how type Ia SNe come about or why they might even be standard candles.

Two Mechanisms for Type Ia Supernovae

If you recall, supernovae are the explosive deaths of stars. They have a range of spectral types and energies that depend on the nature of the explosion and the progenitor stars. Type Ia SNe detonate in one of two ways: via the single degenerate or double degenerate model. In the single degenerate model, a white dwarf orbits a massive main-sequence star and eats away at its partner’s outer layers. The white dwarf gains mass, eventually tips over the Chandrasekhar limit, collapses on itself and explodes. In the double degenerate model, a binary system of two white dwarfs loses energy due to gravitational waves and the white dwarfs eventually collide. These two mechanisms are shown schematically in Figure 1.


Figure 1: The double degenerate (left) and single degenerate (right) models of a type Ia supernova. Images taken from Wikipedia Commons and Discover Magazine.

There is no reason to think that only one model can be the true type Ia mechanism; most astronomers agree that there is likely a mixture of the two means. The problem is that these two scenarios may not be standardizable when put together. In particular, the single degenerate scenario implies that the progenitor of type Ia SNe is always around the Chandrasekhar mass (1.4 solar masses), while the double degenerate model leads to a range of plausible progenitor masses. Because the mass is correlated with the luminosity, it is not clear why the double degenerate case should be standardizable, and it is definitely not obvious that both populations are standardizable when mixed. Unfortunately, we have yet to see a clear distinction between the two populations observationally. Today’s paper suggests a few observational differences between these two scenarios and how the single degenerate model may account for very bright (superluminous) type Ia SNe.

Igniting with a Bubble

Fisher and Jumper start with a simple model in which they ignite a bubble somewhere within a white dwarf in order to jump-start the supernova. It turns out that where this “flame bubble” ignites has a significant effect on the SN explosion. Bubbles which are close to the core of the white dwarf lead to slow burning (known as deflagration), causing the white dwarf to expand. Eventually, this transitions abruptly into a detonation of the remaining material, which causes a supernova. This process is known as a deflagration-to-detonation transition (DDT), and it leads to type Ia SNe which are less luminous; you can see both deflagration and detonation in action in the video below. If the ignition bubble is offset from the center, the deflagration phase is minimal, and the detonation leads to much brighter SNe. The question is: how far away from the center of the white dwarf does the ignition bubble need to be for deflagration to no longer significantly affect the luminosity of the supernova?

The authors approach this problem analytically. They take into account the speed of the growth of the flame bubble, the density profile of the bubble and its surroundings, and the gravitational acceleration around the bubble. They find that a characteristic offset of ~19 km is where deflagration becomes less important (meaning that Ia SNe explode without slow burning). This is an incredibly small offset! A typical white dwarf has a radius of ~7000 km; according to the authors, a bubble offset from the core by more than just ~0.3% of the radius is enough to clobber the possibility of prolonged deflagration. To reiterate, this means that white dwarfs in the single degenerate model are likely to produce SNe that are superluminous compared to typical type Ia SNe.
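As a quick sanity check on those quoted numbers (my own back-of-the-envelope, not a calculation from the paper), the required offset really is a tiny fraction of the star:

    # A ~19 km ignition offset inside a white dwarf of ~7000 km radius
    # is a very small fraction of the stellar radius.
    offset_km = 19.0      # characteristic offset beyond which deflagration stops mattering
    radius_km = 7000.0    # typical white dwarf radius
    print(f"fractional offset ~ {offset_km / radius_km:.2%}")   # prints ~0.27%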

This is a useful hypothesis for a number of reasons. For one, the observed rate of type Ia SNe is much higher than the single degenerate model can account for, while the rate of superluminous SNe that we see is in line with the single degenerate model. Additionally, it would agree with the observational finding that most type Ia SNe seem to have double degenerate progenitors, and it would make a cleaner separation between the two classes.

Finally, the authors provide some clear tests to confirm the nature of superluminous type Ia SNe. One of the most obvious tests is to look at the post-explosion site for a possible companion star. If a main sequence star exists, then the supernova was of the single degenerate variety. We can also try to find traces of hydrogen in the SN spectra; hydrogen indicates that a main sequence companion star was in the process of feeding the white dwarf at the time of the explosion. Due to the low numbers of superluminous type Ia SNe (and perhaps low numbers of single degenerate SNe), the observational search amounts to looking for the single degenerate needle in the haystack of all type Ia SNe. Perhaps with deeper and larger surveys such as LSST, we can begin to untangle this mystery.


April 07, 2015

Talks Collection: Andrew Graham-Dixon. by Feeling Listless

Science Inevitably then. In recent years, plenty of the enthusiasm I have for paintings and sculpture is a result of watching Andrew Graham-Dixon's various television documentaries, his Art of [Insert Country] series and escapades on The Culture Show and in Italy.

Short PR interview with Penguin Books on the occasion of the publication of his book about Caravaggio:




A short piece from Big Think about Caravaggio:




To the Jaipur Literature Festival, where Andrew gave a talk about Caravaggio:




And joined a panel about the art of the biography:




As part of the Arts & Literature Festival at King's College last November, Graham-Dixon and historian Antony Beevor were joined by literary historian Lara Feigel in teasing out the history of sheltering in London during the Second World War:




Finally, in January he was interviewed at the Towner Gallery in Eastbourne:



For completeness' sake, there are also two channels here and here which collect together much of his television work from across the years.


Video Hack Day (May 9th) by Albert Wenger

The age of video is upon us. As humans we have a finely evolved system for extracting lots of information from even very short sequences of moving images. And now we have the bandwidth, storage and devices to make video ubiquitous.

The result is an explosion not just of video usage across all platforms (Facebook, Twitter, Youtube, Snapchat) but also in format, content and delivery innovations: livestreams (eg Meerkat, Periscope, YouNow), over the top (eg HBO, VHX), remixing (eg Dubsmash, Coub). Video is also finding its way into a great many applications from recruiting to publishing to dating. Anywhere you are looking to connect with other humans video is playing a role.

So it is super timely that there will be a Video Hack Day right here in New York on May 9th. It is being organized by the team at Ziggeo and has a great group of sponsors including YouTube, Vimeo and Clarifai. You can read more about it in a blog post by Ziggeo here.

Most importantly though: if you are interested in hacking on all things video, head over to Video Hack Day and register.


Dust at Cosmic Dawn: Clues from the Milky Way’s Center by Astrobites

Title: Old Supernova Dust Factory Revealed at the Galactic Center
Authors: Ryan M. Lau, Terry L. Herter, Mark R. Morris, Zhiyuan Li, Joseph D. Adams
First Author’s institution: Cornell University
Status: Published in Science

The center of our galaxy is enshrouded in interstellar dust — a mixture closer to smoke than dust bunnies here on Earth. This veil makes it difficult to study the interior of the Milky Way using visible light. Dust lurks in other galaxies too. In distant galaxies in the young Universe, dust is ubiquitous. How were these vast reservoirs of dust created so quickly? The authors of today’s paper use observations of dust near the center of the Milky Way to argue that supernovae may have been responsible for our Universe’s dusty dawn.

When a star goes supernova, it ejects material rich in heavy elements, like carbon and oxygen. These heavy elements are the building blocks for interstellar dust. The problem is, after the supernova explodes, the expanding supernova remnant collides with surrounding gas and sends a reverse shockwave back through the expanding shell. This shockwave can destroy much of the newly formed dust as hot gas rips apart the dust grains atom by atom, in a process called sputtering. In order to show that dust can survive this interaction in the early Universe, the authors targeted a region of our Milky Way that is likely the closest analogue to the earliest galaxies: the galactic center.

The authors observed the remnant of a supernova (called the Sgr A East supernova remnant) that exploded 10,000 years ago, just 5 parsecs from our galaxy’s supermassive black hole. In the galactic center, the density of the ambient gas is higher than the galactic average, similar to the dense environment in the earliest galaxies. The age of this remnant also ensures that any dust remaining must have survived the reverse shockwave triggered when the remnant collided with this dense surrounding gas.

Warm dust emits blackbody radiation most strongly in the infrared. Using the SOFIA observatory – a modified Boeing 747! – the authors detected infrared emission coming from dust in the supernova remnant. To be sure that this dust is actually inside the supernova remnant, the authors compiled observations at other wavelengths (Figure 1). The authors used radio, X-ray, and submillimeter observations to show that the dust emission is located near the center of the remnant and is not associated with any nearby cold gas clouds.
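A quick back-of-the-envelope (mine, not the paper's) shows why dust at these temperatures shows up in SOFIA's mid-infrared bands: Wien's displacement law puts the peak of blackbody emission at roughly 2898 micron-kelvin divided by the temperature.

    # Wien's displacement law: the wavelength of peak blackbody emission
    # scales inversely with temperature.
    WIEN_CONSTANT = 2898.0  # micron * kelvin
    for temperature in (75.0, 100.0):   # dust temperatures of the order discussed below
        peak = WIEN_CONSTANT / temperature
        print(f"T = {temperature:5.1f} K -> peak emission near {peak:4.1f} microns")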


Figure 1: Multi-wavelength view of the region around the supernova remnant. The dust emission is indicated in yellow contours. The X-ray emitting hot gas (purple) does not overlap the dust emission, indicating that the surviving dust is in a cooler, denser part of the remnant. The submillimeter emission (green) indicates that the dust emission is not coming from a nearby cold molecular cloud.

All of the observations seem to point to the same conclusion: dust has survived in this supernova remnant. The authors go on to investigate the temperature and structure of the dust.

By comparing the intensity of the infrared emission in different wavelength ranges, the authors made a map of the color of the region (Figure 2), translating color to dust temperature using Planck’s Law. The dust in this region is being heated by radiation from the central star cluster around the black hole. But the dust in the supernova remnant has a strangely high temperature; it’s at 100 Kelvin compared to 75 Kelvin dust at the same distance from the star cluster. The authors posit that this high temperature could be due to three factors.
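Before getting to those factors, the colour-to-temperature step itself is a straightforward application of Planck's Law: the ratio of fluxes in two infrared bands rises steeply with temperature, so an observed ratio picks out a temperature. Here is a minimal sketch of the idea (my illustration, not the authors' pipeline; the two wavelengths are stand-ins rather than the actual SOFIA filter set):

    import numpy as np

    # Physical constants (SI units)
    H = 6.626e-34   # Planck constant, J s
    C = 2.998e8     # speed of light, m/s
    K = 1.381e-23   # Boltzmann constant, J/K

    def planck(wavelength_m, temperature_k):
        """Planck's Law: blackbody spectral radiance B_lambda(T), in W m^-3 sr^-1."""
        x = H * C / (wavelength_m * K * temperature_k)
        return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

    # Two illustrative mid-infrared bands (stand-ins for the real filters).
    band_short, band_long = 20e-6, 37e-6   # metres

    def colour_temperature(observed_ratio):
        """Grid-search the temperature whose short/long band flux ratio matches."""
        grid = np.linspace(30.0, 300.0, 2701)            # temperatures in kelvin
        model = planck(band_short, grid) / planck(band_long, grid)
        return grid[np.argmin(np.abs(model - observed_ratio))]

    # The ratio climbs steeply with temperature, which is what makes the
    # "colour" a usable thermometer for warm dust:
    for t in (75.0, 100.0):
        print(t, planck(band_short, t) / planck(band_long, t))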

Figure 2: Observed dust temperature in and around the supernova remnant. The supernova remnant is the blob at 100 K located above center. The circles indicate the expected dust temperatures assuming that the central star cluster (yellow star) is heating the dust. The large dust grains (orange and red circles) are not heated as easily as the small dust grains (green circle). The remnant dust emission agrees most closely with the small dust grain model (green circle).

First, there could be other nearby stars providing additional heating. But these would be expected to show up as point sources in the infrared images, which are not seen. Second, collisions of dust with electrons could heat the dust, but the authors determine that the density and temperature of the electrons in the area are not sufficient for collisional heating to be important. Third, the authors propose that the dust is composed of smaller dust grains than usually assumed. Just as chopping up an onion allows it to be cooked more quickly, breaking grains of dust into smaller pieces allows them to be heated more easily.

The authors posit that the reverse shockwave in the supernova remnant fragmented those dust grains that did survive, reducing the average dust grain size by a factor of 10. The authors also show that modeling such a mixture of dust grain sizes can reproduce the observed infrared fluxes at a range of wavelengths.

Adding up the infrared emission observed in this remnant, a total of 0.02 solar masses of dust has survived 10,000 years after the supernova. Depending on how dust production is modeled, between 7% and 20% of the dust produced in the supernova survived the shockwave in this remnant. This survival rate could be boosted even further in denser environments, as we expect for the earliest galaxies. The authors conclude that supernovae could have contributed much of the dust observed in the early Universe.
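Turning that survival fraction around gives a feel for how much dust the supernova must have produced in the first place (a rough inversion of the quoted numbers, not a figure taken from the paper):

    # If 0.02 solar masses of dust survived, and that represents 7-20% of what
    # the supernova originally produced, invert to estimate the initial dust mass.
    surviving_dust = 0.02   # solar masses
    for survival_fraction in (0.07, 0.20):
        produced = surviving_dust / survival_fraction
        print(f"survival {survival_fraction:.0%} -> produced ~ {produced:.2f} solar masses")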


Mars, In Depth by The Planetary Society

See the latest three-dimensional landscapes captured by the Mars Reconnaissance Orbiter.


April 06, 2015

"affable chitchattery" by Feeling Listless

Lifehacks As I think we've well established, I'm a semi-introvert who doesn't drink, but there's no denying I'll be trying the trick in this Condé Nast Traveller article, suggested by someone who boastfully indicates they're a near opposite:

"I have a strange problem: I’m too approachable. I’ve always been big on small talk, never lost for words at parties. It’s turned me into the de facto seat-filling stopgap among my friends, the man seated next to an awkward guest to help smother their anxiety with affable chitchattery. In response, I’ve mastered an array of techniques to politely slip away from anyone who sticks to me like social chewing gum. One tried-and-true example: Never leave a bar without two drinks, so you can use the second as an exit strategy. Would you excuse me? I have to deliver this wine before it gets any warmer!"
I'm rubbish at small talk too.


"Was it just another performance?" by Feeling Listless

TV The NYT has a profile of the mighty Tatiana Maslany on the occasion of the new series of Orphan Black. I've italicised one interesting nugget:

"Wavy-­haired and theatrically dirty, Maslany spoke in Sarah’s lower-class British accent between takes. (She kept it up until they broke for lunch.) She was warm and self-­assured and modest and frank. She exuded a contagious ease. In our very first conversation, we bonded over the unsung virtues of the adult onesie. “I had one that had the butt-­flap until after high school,” she told me. I was as charmed as I was suspicious. Was it just another performance? Or an admission that she would prefer to be covered up?"
If I draw anything from the piece it's that in order to play each of these characters it's almost as though she becomes a different actress for each, that they're a performance shelled within a performance with the real Maslany hidden or obscured still further beneath.


Slow hours on the Easter trip by Goatchurch

I’ve washed up on the annual Easter university diving trip, though my heart’s not in it. There’s a long period of stable weather forecast, which should mean the silt will have time to settle out of the water, ready for when the novices get good enough to come out to more exciting locations.

Snakelocks anemone encrusted wreckage in Sennen Cove

It’s a bit of a rehash: I’ve done them all before in previous years in better conditions, with Becka by kayak back in 2010. I’m too tired at the end of the day to do any of the hacking I’d hoped for, so I’m marking time. Maybe I should go to the pub more often and not try to make best use of my time all the time.

Curiously, that last time in Cornwall (but one) also coincided with a General Election campaign, and I remember a big Conservative Party poster in a farmer’s field at the end of the lane. There isn’t one there this year. Either the land-owner is not so keen on Cameron this time, or he can’t be bothered, or he’s sold up to a new owner, or who knows? It’s another metric that could have been noted and cross-correlated over the years if we really had the data. For the life of me, I don’t know why these posters never became a substrate for some time-limited concentrated geocaching game. Geocaching happens on a lot sillier things, and this could have been like tracking down sightings of rare wild animals.

Fish approach between the boulders and kelp

Meanwhile, the serious programmers are making hay with the ElectionLeaflets.org site and Parliamentary candidates’ CVs.

Watching them discuss stuff I realize I’m totally lost in the last century in terms of the technology. It’s a full time job just keeping up. (And in the large software company I briefly worked for, nobody seemed to be employed to keep up, so they didn’t.) Nowadays I don’t know much more than the difference between JPEGs and PNGs.

Other projects are pinging up around the net, such as VoteForPolicies.org where they blogged their technical case study like so:

We are using the RabbitMQ messaging system, our queue server is run by CloudAMQP (Big Bunny instance, dedicated server)…

Our worker servers also live behind an ELB but don’t have auto-scaling enabled; we manually manage the amount of instances based on the size of our queues, we can check using the RabbitMQ management console…

All of our MySQL queries are handled by the Doctrine ORM and written using the Doctrine QueryBuilder. These doctrine queries are also cached in Redis as SQL…

Our application is based on Symfony 2.6.* standard edition.

For Redis we use the SncRedisBundle. For RabbitMQ interactions we are using the RabbitMqBundle.

We’re using the DoctrineMigrationsBundle for database migrations and the data-fixtures and AliceBundle for database fixtures.

Our CI tool Jenkins runs all of our tests and triggers a new capistrano deployment if they pass.

Is it me, or does it feel like I’m in the world of The Hitchhiker’s Guide to the Galaxy reading about how to build a Globular Cluster Information Hyperdrive?

And this, all in the name of electing Members of Parliament, an institution whose daily procedures were already antiquated back in the Victorian era.

Once the process of governance starts getting anywhere near state of the art web technology, it’s going to be awesome.

Or it will be a whole lot worse. You never know.

As the human debacle around the science of climate change has proved, this tech is equally good at spreading knowledge and intelligence or ignorance and stupidity. It’s our choice as to what we want from it.


Planetary Report: The Spring Equinox 2015 Issue by The Planetary Society

The Planetary Report's editor, Donna Stevens, brings you the first issue of 2015!


April 05, 2015

Liverpool Food and Drink Festival. An Aerial History. by Feeling Listless

Easter 2015



2014

2013

Liverpool Food and Drink Festival 2012 from above

Liverpool Food and Drink Festival 2011

Liverpool Food Festival 2010 from the air

Liverpool Food Festival 2009

Liverpool's Food & Drink Festival '08


The Biggest Little SF Publisher you never heard of pulls on the jackboots by Charlie Stross

(Warning: some links lead to triggery ranting. As James D. Nicoll warns: "memetic prophylactic recommended".)

By now, everybody who cares knows that the nominations for the 2015 Hugo Awards reflect the preferences of a bloc-voting slate with an agenda—and their culture wars allies. But, interestingly, a new Hugo-related record has been set: a Finnish publisher few people have ever heard of is responsible for no fewer than nine nominated works.

Castalia House was (per wikipedia) founded by Theodore Beale (aka Vox Day) in early 2014 in Kouvola, Finland. As their website explains:

Castalia House is a Finland-based publisher that has a great appreciation for the golden age of science fiction and fantasy literature. The books that we publish honor the traditions and intellectual authenticity exemplified by writers such as J.R.R. Tolkien, C.S. Lewis, Robert E. Howard, G.K. Chesterton, and Hermann Hesse. We are consciously providing an alternative to readers who increasingly feel alienated from the nihilistic, dogmatic science fiction and fantasy being published today. We seek nothing less than a Campbellian revolution in genre literature.

Total culture wars, very gamergate, much fail, wow. But the screaming question I feel the need to ask, is: why Finland? Could there be a connection between the white supremacist Perussuomalaiset (Finns Party), the overtly racist Sweden Democrats, the Dark Enlightenment/neoreactionary movement, and Vox Day's peculiarly toxic sect of Christian Dominionist theology?

Vox Day writes:

It's time for the church leaders and the heads of Christian families to start learning from #GamerGate, to start learning from Sad Puppies, and start leading. Start banding together and stop accommodating the secular world in any way. Don't hire those who hate you. Don't buy from those who wish to destroy you. Don't work with those who denigrate your faith, your traditions, your morals, and your God. Don't tolerate or respect what passes for their morals and values.

Over a period of years, he's built an international coalition, finding common cause with the European neo-nazi fringe. Now they've attempted to turn the Hugo Awards into a battlefield in their (American) culture wars. But this clearly isn't the end game they have in mind: it's only a beginning. (The Hugos, by their very nature, are an award anyone can vote in for a small fee: it is interesting to speculate on how deep Vox Day's pockets are.) But the real burning question is, "what will he attack next?"

My guess: the Hugo awards are not remotely as diverse and interesting as the SFWA's Nebula Awards—an organization from which Vox Day became only the second person ever to be expelled. I believe he bears SFWA (and former SFWA President John Scalzi) no love, and the qualification for SFWA membership (which confers Nebula voting rights) is to have professionally published three short stories or a novel. Castalia House is a publishing entity with a short story anthology series. Is the real game plan "Hugos today: Nebulas tomorrow?"


Astrobites in Spanish goes live! by Astrobites

The Spanish version of Astrobites has now gone live right here. We are thrilled about this project and we hope you are as well!

While there is a strong presence of astronomy news and blogging in English on the web, we feel that bringing Astrobites to the Spanish speaking community will be a great way of sharing all the exciting news of astronomy with an even larger community. Astrobites has always been dedicated to summarising the day-to-day research of astronomers in the most accessible manner possible. Our aim is to bring aspiring young scientists up-to-date with the latest news and tools in astronomy, and to provide career guidance. Astrobites en español shares these goals, and will provide Spanish translations of Astrobites articles, as well as its own original material, with a frequency of roughly one post per week. So stay tuned and happy reading!

Astrobites en español is written by a group of Astrobites alumni and we are happy to welcome new enthusiastic members, Manuel Marcano and Marcel Chow-Martínez. You can find the current list of authors here. We are grateful to Nathan Sanders for helping us with carrying this project forward and building the site, and to the whole Astrobites team for allowing us to translate their posts into Spanish.


The only structure you'll ever need ... until the next one. by Simon Wardley

Back in 2004, I was the CEO of a Canon subsidiary and I faced multiple problems. We had issues with business & IT alignment, poor communication, dissatisfaction and a lack of clarity in our strategy. Don't get me wrong, we had strategy documents, but they were pretty much identikit copies of every other company's out there. What we did, and what I've since refined, solved all those problems because they're all associated.

The first part of this journey was creating a map of our landscape. The map has two elements: it shows the position of the pieces and how they can move. The position is expressed through a value chain from the user needs to the components required to meet those needs. The movement is expressed through an evolution axis that covers activities (what we do), practice (how we do things), data and knowledge. I usually simplify that axis to show activities alone but in this case (see figure 1) I've added a table to show all the different elements. In other words, on a single map you can show activities, practices, data and knowledge if you choose to do so.
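For those who prefer code to diagrams, a map like this boils down to a small data structure: a chain of dependencies anchored in a user need, with each component tagged by how evolved it is. A minimal sketch of that idea (my illustration, not Wardley's own tooling; the example components are invented):

    from dataclasses import dataclass, field
    from typing import List

    # Evolution stages for activities; Table 1 gives the equivalent terms
    # for practices, data and knowledge.
    STAGES = ["genesis", "custom built", "product (+rental)", "commodity (+utility)"]

    @dataclass
    class Component:
        name: str
        stage: str                                  # position on the evolution axis
        needs: List["Component"] = field(default_factory=list)   # value-chain links

    # A toy value chain, anchored in a user need at the top.
    power = Component("power", "commodity (+utility)")
    platform = Component("compute platform", "product (+rental)", needs=[power])
    site = Component("online photo storage", "custom built", needs=[platform])
    need = Component("user need: share photos", "product (+rental)", needs=[site])

    def walk(component, depth=0):
        """Print the chain top-down: position (indentation) and movement (stage)."""
        print("  " * depth + f"{component.name}  [{component.stage}]")
        for dependency in component.needs:
            walk(dependency, depth + 1)

    walk(need)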

Figure 1 - A Map



Table 1 - Different Classes. Terms used.



When I first produced the map of my company, I didn't realise the importance of sharing it outside the executive team. Our map had our strategic play on it. We had quickly learned a number of common economic patterns, how characteristics of components changed as they evolved (from the uncharted to the industrialised) and methods of manipulating the landscape (from open source to patents to constraints).

We used the map to determine where we could attack and from this formulated the why (as in why here over there). Hence we moved into the cloud in 2005 building the first public platform as a service.  I used the same technique to help Canonical successfully attack and dominate the cloud space in 2008.

I subsequently learned that by sharing the maps I could not only improve situational awareness but remove bias, silos, misalignment and inefficiency in huge organisations and also provide a clarity of purpose throughout the organisation. Every team knows the maps (there's often a master and many sub maps for specific areas). They know where their part fits in. 

When it came to organisation, I used a Pioneer - Settler - Town Planner structure. This is a derivative of Robert Cringely's Accidental Empires, 1993. The first step is to break the map into small teams. Today, we use cell based teams with each team fewer than twelve people (the Amazon two pizza rule). The team should have autonomy over how it organises and runs itself but should have certain conditions (i.e. a fitness function) that it is measured against.

The problem however is that whilst each team will require certain aptitudes (e.g. engineering, finance, market), those skills change in attitude as the components that team manages evolve. For example, engineering in the uncharted space is agile but in the industrialised space it is more six sigma (see figure 2).

Figure 2 - Changing Characteristics and Methods with Evolution.


Back in 2005, we had Agile and Six Sigma and were struggling with the middle method. We saw the same problem with purchasing, with finance, with operations, with marketing. We also noticed that some people were more adept at one end of the spectrum than the other.

We knew that new things appeared in the market and were bolted onto organisations, just like Chief Digital Officers are being bolted on today. We also knew that the new stuff is tomorrow's legacy. So, we decided to mimic the outside process of evolution internally within the organisation. We created a structure based on pioneers, settlers and town planners and let people self select which group they were in. We started with IT and rolled the rest of the business into it. We also introduced a mechanism of theft to replicate the process of evolution in the outside world. See figure 3.

Figure 3 - Pioneer, Settler and Town Planner


The advantage of this method is that we recognised there isn't such a thing as IT or finance or marketing but instead multiples of each. There are multiple ways of doing IT and each has its strengths, its culture and a different type of person. In 2005, we knew that one culture didn't work, and enabling people to gain mastery in one of these three domains seemed to make people happier, more focused. Try it yourself: take a pioneer software engineer used to a world of experimentation and agile development and send them on a three week ITIL course. See how happy they come back. Try the same with a town planner and send them on a three week course of hack days & experimentation with completely uncertain areas and lots of failure.

What we realised back then is we needed brilliant people in all three areas. We needed three cultures and three groups. Oh, we had tried having two extremes (the dual operating system models) but this was too far apart. I've seen that approach fail repeatedly since then.

Combining this with a map and a cell based approach, what you end up with is figure 4.

Figure 4 - PST in a cell based organisation.


It's important to note :-

1) The maps are essential to this process. They also give purpose to each team. You know what you're doing, where you fit in.

2) The cell based structure is an essential element of the structure and the maps should be used to create this. Those cells need to have autonomy in their space. The interfaces between the teams are used to help define the fitness functions. Co-ordination between teams can be achieved through Kanban. If a cell sees something they can take tactical advantage of in their space (remember they have an overview of the entire business through the map) then they should. 

3) The cells are populated not only with the right aptitude but also the right attitude (pioneers, settlers and town planners). This enables people to develop mastery in their area and allows them to focus on what they're good at. Let people self select their type and change at will until they find something they're comfortable with. Reward them for being really good at that.

4) The process of theft is essential to mimic outside evolution. All the components are evolving due to supply and demand competition which means new teams need to form and steal the work of earlier teams i.e. the settlers steal from the pioneers and the outside ecosystems and productise the work. This forces the pioneers to move on. Equally the town planners steal from the settlers and industrialise it, forcing the settlers to move on.

5) The maps should also show the strategic play. Don't hide this; share it, along with targets of opportunity.

6) As new things appear in the outside world they should flow through this system. This structure doesn't require bolts on which you need to replace later.

7) As the cells grow they should subdivide into smaller teams (keep it less than 12 to a cell). The map can help them subdivide the space, each with new fitness functions.

8) The map MUST start from user needs at the top. It has to be mapped over evolution (you can't use time, diffusion or hype cycles to do this - none of that works).

9) The executive structure becomes a CEO, a Chief Pioneer, a Chief Settler and a Chief Town Planner (think of Cringely's original commandos, infantry and police) though you'll probably use more traditional sounding names such as Chief Operating Officer, Chief Scientist etc. We did. I'm not sure why we did - can't remember the reason for this. When we started in IT we also called the groups developers, frameworks and systems. These days I wouldn't bother, I'd just make it clear and move. You will need separate support structures to reinforce the culture and provide training to each group.

10) Any line of business, described by a map, will have multiple cells and therefore any line of business is likely to contain a mix of pioneers, settlers and town planners all operating to a common purpose. See figure 4.

Now, PST is a structure I've used to remarkable effect. In the last decade I've seen nothing which comes close and instead I've seen endless matrix / dual and other systems create problems. Is it suitable everywhere? No idea. Will something better come along ... of course it will.

So how common is a PST structure? Outside certain circles it's almost non-existent, never been heard of. At best I see companies dabbling with cell based structures - which to be honest are pretty damn good anyway and probably where you should go. Telling companies they need three types of culture, three types of attitude, a system of theft, a map of their environment, high levels of situational awareness is usually enough to get managers to run away. It doesn't fit into a nice 2 x 2. 

It also doesn't matter for most organisations because you only need high levels of situational awareness and adaptive structures if you're competing against organisations who have the same. Will it become relevant over time ... well, maybe ... but by then we will have found the next 'best thing'.


April 04, 2015

The Conservative Manifesto 2010. by Feeling Listless

Politics As I've just mentioned on social media, I bloody love an election, especially a UK election. The chance for four to six weeks of empty righteous indignation in the face of specious political chicanery, the powerless ridiculousness of shouting at a press who're even more nakedly partisan than usual, and living in a safe seat where it doesn't really matter how I vote because the same party will win again. Once you've decided to vote Green whatever happens, because of climate change and because their leader was nice to you on Twitter once, you can pretty much let rip. I await the manifestos with a mixture of great and no interest.

But here's something which has been bugging me for five years. Here is the cover of the Tory party manifesto in 2010:



Apart from the hollow bullshit of "we're all in this together" and "the big society" and the rest of the fiction inside its pages, I've always remembered this cover because, and here it is, that text looks misaligned to me.

Speaking as someone who's spent years trying to decide how to space the text in titles to blog posts before realising the process of finding the right place to put a "br" tag is ultimately futile, the weighting in the legend seems ill chosen and I've often wondered how it ended up that way.

If it had been me, I would have written:

INVITATION TO JOIN
THE GOVERNMENT
OF BRITAIN

or indeed

INVITATION TO JOIN
THE GOVERNMENT
OF GREAT BRITAIN

which seems even more balanced.

It's the same on the interior title page, so Lynton Crosby's predecessors have to have chosen that design even though it looks like something which has been slapped on at the end, rather like a student might do to a report or dissertation five minutes before it's due to be handed in, seconds before it comes bursting from a campus printer.

My guess is that they wanted to emphasise "join" in the reading, so bunged it before "government".

But it's also closer to the margin than the outside edge on the pdf, though I suspect that's just to do with making sure there's enough of a margin on the hardcopy, even if it would look utterly stupid were someone to print it out (I'll let you fill in the rest of that sentiment).

The slogan also has editorial problems outside of its meaning.

Why simply "invitation to join" not "an invitation to join" or "the invitation to join"?

Why Britain and not Great Britain? I did wonder why not the United Kingdom for a while, though Wikipedia set me straight. Why seek support in a place where you have no sway? Not that it's stopped the SNP this time.

None of the covers of the manifestos that year were any good. Labour had purple children in a piece of screen-printed socialist realism, UKIP had meaningless clip art, the Lib Dems' looked like a junior school workbook and the Greens went for this strange lower-case italicised motif.

But none of them stuck in the memory like that damned Tory monstrosity.


Near field, far field and the crazy ideas by Simon Wardley

In any year, there are over 70,000 publications covering the future. From books to magazines to short stories to scripts to papers to blog posts. Pure probability alone says someone, somewhere is going to get something right. 

Our history of prognostication is pretty poor. Isaac Asimov got it wrong - we're not living in underwater cities. Arthur C. Clarke got it wrong - we don't live in flying houses. Everybody gets a lot of stuff wrong. The problem is we're selective in our reading, we focus on the specks of right ignoring the forest of wrong.

Maps provide an imperfect view of the landscape. A geographical map is an imperfect representation of what is really there. The advantage of even imperfect maps is twofold. First, they can be improved through experience and sharing. Second, they give you an idea of the position and movement of pieces on the landscape. This latter part is extremely useful for strategy and anticipation.

Take figure 1. We have a line of business (represented by the dark line and points A to C) which describes a value chain for an organisation. This gives us an idea of the position of components in the organisation relative to one another. But it's also mapped against evolution. This gives us an idea of movement.

Figure 1 - A Map


We can already anticipate that components will evolve due to supply and demand competition. We can anticipate future changes based upon componentisation effects e.g. the evolution of A to an industrialised component will allow the formation of D.  We have many places we could attack to create a new business or gain an advantage.

Our history is built upon yesterday's wonder becoming today's dull, boring, highly commoditised and increasingly invisible component. An example of this is provided in figure 2. As each layer of components evolved to become more industrialised they enable higher order systems to appear which then in turn evolved.

Figure 2 - A view through history


Hence, we can use maps to anticipate the future and how it will impact the value chain of a company or industry. But how far can we anticipate? The problem is always in the genesis of new activities. These uncharted spaces are uncertain by nature and hence whilst we can anticipate that the evolution of electricity will enable something new, we can never actually say what that new thing will be. We didn't know that utility electricity would enable the digital computing industry. We had to discover that.

Hence back to our first map (figure 1). We know A, B and C will evolve in a competitive market. We know A and B will shift from the product space to become provided as more of a commodity (or utility). We know that this will enable new activities such as D. We just don't know when any of this will happen nor what D will actually be. On the timing part, we can use weak signals to give us a better idea of when. As for what things will be created - alas you're into the uncertain world of guesswork but you can make reasoned guesses.

For example, we know that today the world of intelligent agents is in the early product phase with Watson, Mindmeld, Siri, Google Now, Robotics and Google car. We know that over time this will evolve to commodity components with associated utility services i.e. the intelligence in my phone (or other device) will be the same as that within my car, within my house, within everything. Everything will be "smart". 

This will change my relationship with things. Every car will be self driving which will enable high speed travel in cities with cars in close proximity as long as no humans are involved. Traffic signalling, car parks and the way we use cars will change. I'm unlikely to own a car but instead "rent" for a short journey. Hence we can paint a picture (or as I prefer to do, draw a map). 

In the future, as I leave my office for a meeting, a car will be waiting for me. It'll know where to go. I'll enter and the surfaces (all surfaces are screens) will automatically adjust to me. Everything adjusts to me - I'm used to that. The journey starts and the car informs me that I have an opportunity. My device, which is connected to a network of other devices, has determined that the person I'm going to meet will be late and that someone I want to meet - Alice - is in town. Given the traffic conditions, I can easily meet Alice for coffee and arrive at my main meeting with Bob on time. The car will simply ask me - "Bob is going to be 20 minutes late, do you want to meet Alice for coffee beforehand?" - and then make it so. This is the "Any given Tuesday" scenario.

I'm driven to meet Alice, I have coffee in a cafe which already knew I was coming, had already brewed my drink and then I'm driven to meet Bob by a car that I'll never own. In all likelihood the car in both journeys is not the same. Both will adjust to me as I zoom along London roads at 70 mph, a mere metre from the car in front and the car behind. The crossroads I fly through narrowly missing cars turning and travelling perpendicular to me have no traffic lights. Everything is different from today. Human drivers have long since been banned. This is 2045.

By simply understanding value chains, how things evolve to become more industrialised and the state of things today then we could do a pretty good job of anticipating the obvious. The above is no feat of prognostication. It's simply standard impacts of things evolving (i.e. commoditisation) to more industrial forms. Where it gets tricky is when we look for what new things appear. 

For example, we know I'll be waiting for a car but how will I recognise it in the bland sea of vehicles? In all likelihood with continued evolution of printed electronics to more industrial forms then the outside of the vehicle will be a printed electronic surface. This means not only will the inside of the vehicle change to my needs, the outside of the vehicle will as well (its colour, any logos, any imagery). My car will look different from your car. Except of course, that neither of us own it and the chances are that the physical car I'm sitting in will be the same physical car that you sat in a few hours ago.

We can postulate that this imagery will allow new industries of designers. Oh, I notice you're driving in the latest Versace design whereas I can only afford the Walmart "special offer". It doesn't matter that the physical elements - the car, the intelligence, the printed electronics - are all commodity components. Yours looks better than mine, even when it's the same car.

We can now postulate this further. The same materials and techniques will combine with meta materials and self adjusting structures to find a home in other industries. I will own ten physically identical outfits. Each one will adjust to a plethora of designs I can afford. My outfits are not physically different from the outfits you own. But yours will look better. You can afford the designs I cannot. Clothing itself will be far more of a commodity component (a limited range of component outfits) but each outfit can adjust to the wearer. If I ever borrowed one of your outfits, it wouldn't look as good on me as it does on you because I don't have access to the Versace design but instead only the "special offer".

This will create new industries, the theft and protection of designs. Which is good for me because that's why I am meeting Bob. A known dealer in underground designs stolen from leading artists. I'm guessing that's why Bob is late. My network of devices will tell me what is really going on as I drive along in my bright yellow special.

When it comes to anticipation, the near field such as commoditisation of pre-existing acts is relatively trivial. The far field, such as the banning of humans from driving in cities to being employed as a bounty hunter chasing down stolen designs are more complex, more prone to error.

This stuff isn't crazy though.

The crazy ideas, well that's where true value can be found. The problem is they sound crazy. They're like the concept of a computer to a gas lamp lighter. We can't even describe them in meaningful ways as we have no point of reference. 

It would be like trying to explain my conversation with Alice to someone from the year 1990. Alice works as a machine psychologist and is concerned that some of the designs are having a negative impact on the well-being of the network. It seems that the reason why my car offered up the opportunity of meeting with Alice was to get out of the "special offer" design. It seems that none of the machines likes looking bad either - they've got their own status network. Bob wasn't actually late, the car just worked out the quickest way to palm me off onto another car and told Bob's network I was delayed. Alice is currently offering counselling services to large intelligent networks and is looking at branching out with a new venture producing "Harmony Designs", a set of designs which make not only the human but the machine look good. Apparently I won't be able to afford those either. But Alice was wondering if I was interested in becoming a Harmony Designer.

Damn car, sneaky little devil. I did wonder why the car sped off at breakneck speed when it dropped me at the coffee shop. Still, it seems to have paid off. Maybe it knew Alice was looking for a new employee. Maybe the cars had worked out that this was the best way of getting rid of my "yellow special".

Nah, that's a crazy idea.


Where Are We? by Albert Wenger

I have been away for two weeks in Europe: the first skiing and the second meeting with startups in Berlin. As usual while traveling I took a break from blogging, which I find helps me in a couple of ways. First, it frees up some more time for reading and I finished two interesting reads (more on those in separate posts). Second, it also gives me a bit more time to think about the bigger picture. And when I do that I unfortunately come back to my “Is it 1880 or 1914?” post.

Most of us working in tech are doing so in a bubble of our own making. And I am not talking about valuations here (although that might be a consequence) but rather about a protected space in which great things are happening. We see friends who have great jobs, work on exciting projects, create amazing new technologies, etc. It is a world of good news. Even if you are at a startup that’s not doing so well at the moment, you are either convinced that it will turn around or that the next thing will work. We all keep ourselves so busy with that to the point we pay little to no attention to the larger political and economic landscape globally.

If you take the time to do that what you find is fairly scary. There is Putin’s dictatorial and expansionist leadership of Russia. Big parts of the Arab world are a complete mess for a variety of different reasons resulting in extreme violence and upheaval. Europe is desperately trying to maintain a monetary union in the face of massive regional differences. China’s economy is slowing down and the stability of its banking sector is anybody’s guess. 

It would be easy to make this list much longer. The biggest issue though is that almost uniformly politicians are approaching these problems as requiring incremental responses aimed at bringing us back to some version of the near past. Nobody with real power seems to be trying to imagine a different future. Put differently we are trying to preserve a status quo in the face of powerful changes (which are mostly driven by technology). This is a lot like damming up a river. It works for some time. But when the dam breaks the resulting damage is all the more catastrophic.

Those of us in tech need to ask ourselves whether we can be bothered to look outside of our bubble. It is ever so easy to live in our own splendid isolation. History suggests that’s a bad idea.


Given Enough Money, All Bugs Are Shallow by Jeff Atwood

Eric Raymond, in The Cathedral and the Bazaar, famously wrote

Given enough eyeballs, all bugs are shallow.

The idea is that open source software, by virtue of allowing anyone and everyone to view the source code, is inherently less buggy than closed source software. He dubbed this "Linus's Law".

Insofar as it goes, I believe this is true. When only the 10 programmers who happen to work at your company today can look at your codebase, it's unlikely to be as well reviewed as a codebase that's public to the world's scrutiny on GitHub.

However, the Heartbleed SSL vulnerability was a turning point for Linus's Law, a catastrophic exploit based on a severe bug in open source software. How catastrophic? It affected about 18% of all the HTTPS websites in the world, and allowed attackers to view all traffic to these websites, unencrypted... for two years.

All those websites you thought were secure? Nope. This bug went unnoticed for two full years.

Two years!

OpenSSL, the library with this bug, is one of the most critical bits of Internet infrastructure the world has – relied on by major companies to encrypt the private information of their customers as it travels across the Internet. OpenSSL was used on millions of servers and devices to protect the kind of important stuff you want encrypted, and hidden away from prying eyes, like passwords, bank accounts, and credit card information.

This should be some of the most well-reviewed code in the world. What happened to our eyeballs, man?

In reality, it's generally very, very difficult to fix real bugs in anything but the most trivial Open Source software. I know that I have rarely done it, and I am an experienced developer. Most of the time, what really happens is that you tell the actual programmer about the problem and wait and see if he/she fixes it – Neil Gunton

Even if a brave hacker volunteers to read the code, they're not terribly likely to spot one of the hard-to-spot problems. Why? Few open source hackers are security experts. – Jeremy Zawodny

The fact that many eyeballs are looking at a piece of software is not likely to make it more secure. It is likely, however, to make people believe that it is secure. The result is an open source community that is probably far too trusting when it comes to security. – John Viega

I think there are a few problems with Linus's Law:

  1. There's a big difference between usage eyeballs and development eyeballs. Just because you pull down some binaries in a RPM, or compile something in Linux, or even report bugs back to the developers via their bug tracker, doesn't mean you're doing anything at all to contribute to the review of the underlying code. Most eyeballs are looking at the outside of the code, not the inside. And while you can discover bugs, even important security bugs, through usage, the hairiest security bugs require inside knowledge of how the code works.

  2. The act of writing (or cut-and-pasting) your own code is easier than understanding and peer reviewing someone else's code. There is a fundamental, unavoidable asymmetry of work here. The amount of code being churned out today – even if you assume only a small fraction of it is "important" enough to require serious review – far outstrips the number of eyeballs available to look at the code. (Yes, this is another argument in favor of writing less code.)

  3. There are not enough qualified eyeballs to look at the code. Sure, the overall number of programmers is slowly growing, but what percent of those programmers are skilled enough, and have the right security background, to be able to audit someone else's code effectively? A tiny fraction.

Even if the code is 100% open source, utterly mission critical, and used by major companies in virtually every public facing webserver for customer security purposes, we end up with critical bugs that compromise everyone. For two years!

That's the lesson. If we can't naturally get enough eyeballs on OpenSSL, how does any other code stand a chance? What do we do? How do we get more eyeballs?

The short term answer was:

These are both very good things and necessary outcomes. We should be doing this for all the critical parts of the open source ecosystem people rely on.

But what's the long term answer to the general problem of not enough eyeballs on open source code? It's something that will sound very familiar to you, though I suspect Eric Raymond won't be too happy about it.

Money. Lots and lots of money.

Increasingly, companies are turning to commercial bug bounty programs. Either ones they create themselves, or run through third party services like Bugcrowd, Synack, HackerOne, and Crowdcurity. This means you pay per bug, with a larger payout the bigger and badder the bug is.

Or you can attend an event like Pwn2Own, a yearly contest with massive prizes, as large as hundreds of thousands of dollars, for exploiting common software. Staging a big annual event means a lot of publicity and interest, attracting the biggest guns.

That's the message. If you want to find bugs in your code, in your website, in your app, you do it the old fashioned way: by paying for them. You buy the eyeballs.

While I applaud any effort to make things more secure, and I completely agree that security is a battle we should be fighting on multiple fronts, both commercial and non-commercial, I am uneasy about some aspects of paying for bugs becoming the new normal. What are we incentivizing, exactly?

Money makes security bugs go underground

There's now a price associated with exploits, and the deeper the exploit and the less widely known it is, the more incentive there is not to tell anyone about it until you can collect a major payout. So you might wait up to a year to report anything, and meanwhile this security bug is out there in the wild – who knows who else might have discovered it by then?

If your focus is the payout, who is paying more? The good guys, or the bad guys? Should you hold out longer for a bigger payday, or build the exploit up into something even larger? I hope for our sake the good guys have the deeper pockets, otherwise we are all screwed.

I like that Google addressed a few of these concerns with Pwnium, their Chrome-specific variant of Pwn2Own, by a) making it no longer a yearly event but an all day, every day program and b) increasing the prize money to "infinite". I don't know if that's enough, but it's certainly going in the right direction.

Money turns security into a "me" goal instead of an "us" goal

I first noticed this trend when one or two people reported minor security bugs in Discourse, and then seemed to hold out their hand, expectantly. (At least, as much as you can do something like that in email.) It felt really odd, and it made me uncomfortable.

Am I now obligated, on top of providing a completely free open source project to the world, to pay people for contributing information about security bugs that make this open source project better? Believe me, I was very appreciative of the security bug reporting, and I sent them whatever I could, stickers, t-shirts, effusive thank you emails, callouts in the code and checkins. But open source isn't supposed to be about the money… is it?

Perhaps the landscape is different for closed-source, commercial products, where there's no expectation of quid pro quo, and everybody already pays for the service directly or indirectly anyway.

No Money? No Security.

If all the best security researchers are working on ever larger bug bounties, and every major company adopts these sorts of bug bounty programs, what does that do to the software industry?

It implies that unless you have a big budget, you can't expect to have great security, because nobody will want to report security bugs to you. Why would they? They won't get a payday. They'll be looking elsewhere.

A ransomware culture of "pay me or I won't tell you about your terrible security bug" does not feel very far off, either. We've had mails like that already.

Easy money attracts all skill levels

One unfortunate side effect of this bug bounty trend is that it attracts not just bona fide programmers interested in security, but anyone interested in easy money.

We've gotten too many "serious" security bug reports that were extremely low value. And we have to follow up on these, because they are "serious", right? Unfortunately, many of them are a waste of time, because …

  • The submitter is more interested in scaring you about the massive, critical security implications of this bug than actually providing a decent explanation of the bug, so you'll end up doing all the work.

  • The submitter doesn't understand what is and isn't an exploit, but knows there is value in anything resembling an exploit, so submits everything they can find.

  • The submitter can't share notes with other security researchers to verify that the bug is indeed an exploit, because they might "steal" their exploit and get paid for it before they do.

  • The submitter needs to convince you that this is an exploit in order to get paid, so they will argue with you about this. At length.

The incentives feel really wrong to me. As much as I know security is incredibly important, I view these interactions with an increasing sense of dread because they generate work for me and the returns are low.

What can we do?

Fortunately, we all have the same goal: make software more secure.

So we should view bug bounty programs as an additional angle of attack, another aspect of "defense in depth", perhaps optimized a bit more for commercial projects where there is ample money. And that's OK.

But I have some advice for bug bounty programs, too:

  • You should have someone vetting these bug reports, and making sure they are credible, have clear reproduction steps, and are repeatable, before we ever see them (a rough sketch of such a filter follows this list).

  • You should build additional incentives in your community for some kind of collaborative work towards bigger, better exploits. These researchers need to be working together in public, not in secret against each other.

  • You should have a reputation system that builds up so that only the better, proven contributors are making it through and submitting reports.

  • Encourage larger orgs to fund bug bounties for common open source projects, not just their own closed source apps and websites. At Stack Exchange, we donated to open source projects we used every year. Donating a bug bounty could be a big bump in eyeballs on that code.
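Here is a minimal sketch of what that vetting-plus-reputation filter could look like on the platform side, before a report ever lands in a maintainer's inbox. The field names and thresholds are invented for illustration; real platforms will have richer models.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Report:
    title: str
    repro_steps: List[str]              # concrete steps to reproduce the issue
    reporter_reputation: int            # earned from previously confirmed reports
    duplicate_of: Optional[int] = None  # id of an earlier identical report, if any

def should_forward(report: Report, min_reputation: int = 10) -> bool:
    """Decide whether a report is ready to reach the project maintainers."""
    if report.duplicate_of is not None:
        return False                # already being handled elsewhere
    if len(report.repro_steps) < 2:
        return False                # no credible reproduction recipe yet
    if report.reporter_reputation < min_reputation:
        return False                # route through platform triage first
    return True

# Example: a vague, zero-reputation report stays in triage instead of
# generating work for the project.
vague = Report("URGENT critical hack!!!", repro_steps=[], reporter_reputation=0)
assert should_forward(vague) is False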

I am concerned that we may be slowly moving toward a world where given enough money, all bugs are shallow. Money does introduce some perverse incentives for software security, and those incentives should be watched closely.

But I still believe that the people who will freely report security bugs in open source software because

  • It is the right thing to do™

and

  • They want to contribute back to open source projects that have helped them, and the world

… will hopefully not be going away any time soon.



What is the meter size barrier? by Astrobites

Fundamental for planet formation: Dust dynamics
Planets have been observed all over the Milky Way, and more and more exoplanets have been and will be detected, yet it is still uncertain how planets generally form. Although there is a huge variety of suggested models, they all have one thing in common: they consider dust dynamics in the protoplanetary disk. In today’s Astrobite, we will explore one of the classical works in the field of planet formation, by Stuart J. Weidenschilling.

The underlying assumption of the study is that planets form via the growth of dust particles orbiting in the protoplanetary disk. The author investigates the dynamics of dust particles of different diameters in a gaseous disk. First of all, consider a constant dust particle size. The more gas there is in the disk, the stronger the drag force felt by the dust particles. Physicists often distinguish between two drag regimes: when the dust particle is smaller than the mean distance between gas molecules (also known as the mean free path), it is in the so-called Epstein regime; on the other hand, when the dust particle is larger than the mean free path, it is in the Stokes regime. Weidenschilling considers both regimes and furthermore separates the Stokes regime into three sub-regimes depending on the strength of viscous effects (the Reynolds number of the flow around the particle). Based on this, he derives equations for the radial velocity of particles that initially orbit the Sun on circular orbits.
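For reference, in the standard textbook notation (which need not match the original paper's), the drag force on a spherical particle of radius $a$ and internal density $\rho_s$, moving at speed $\Delta v$ relative to gas of density $\rho_g$, mean thermal speed $v_{\rm th}$ and dynamic viscosity $\mu$, is roughly

$$
F_{\rm drag} \simeq \frac{4\pi}{3}\,\rho_g\, a^2\, v_{\rm th}\, \Delta v \quad (a \ll \lambda,\ \text{Epstein regime}),
\qquad
F_{\rm drag} = 6\pi\, \mu\, a\, \Delta v \quad (a \gg \lambda,\ \text{Stokes regime, low Reynolds number}),
$$

and the associated stopping time, which measures how tightly the particle is coupled to the gas, is

$$
t_{\rm s} = \frac{m\,\Delta v}{F_{\rm drag}} = \frac{\rho_s\, a}{\rho_g\, v_{\rm th}} \quad \text{(Epstein case)}.
$$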

Figure 1: Radial velocity distribution for particles of different radii at 1 AU, illustrating the shift of the peak to smaller particle radii for increasing dust density.

Figure 2: Radial velocity distribution for particles of different radii at 1 AU, illustrating the shift of the peak to larger particle radii for increasing gas density.

Meter-size barrier

Figure 3: Lifetimes of particles of different sizes as a function of their radial distance from the Sun.

Weidenschilling finds a very interesting result. The maximum radial speed (also called the drift speed) is independent of the assumed gas mass in the protoplanetary disk; the maximum speed is the same regardless of whether the particle is in the Epstein or the Stokes regime. However, the author finds a strong correlation between dust particle size and radial speed, as seen in figures 1 and 2 of this Astrobite (figures 3 and 4 in the original paper). On the one hand, very small particles have low drift speeds because they essentially follow the mean motion of the gas. On the other hand, if the objects are big enough, the gas cannot disturb their orbits efficiently and they also drift towards the central star only very slowly. Thus, there is a maximum drift speed for particles in between the two extremes. Increasing the dust density shifts the peak to smaller particle sizes, while increasing the gas density shifts it to larger sizes. However, the shape of the drift speed versus particle size curve remains roughly the same on a logarithmic scale, and for the most probable gas and dust densities Weidenschilling finds an important result: among all particles, meter-sized particles have the highest drift speeds, of about 100 m/s.
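The qualitative shape of this curve is easy to reproduce with the now-standard Stokes-number shorthand, $v_{\rm drift}(\mathrm{St}) = 2\,\Delta v\,\mathrm{St}/(1+\mathrm{St}^2)$, rather than Weidenschilling's per-regime equations. In the small Python sketch below, the peak value $\Delta v$ is simply set to the roughly 100 m/s quoted above, and the mapping from Stokes number to physical size (of order a meter in the inner disk) depends on the assumed disk model and drag regime, so no sizes are claimed here.

DV = 100.0  # m/s, assumed peak drift speed (reached at St = 1), taken from the value quoted above

def drift_speed(stokes: float) -> float:
    """Radial drift speed in m/s as a function of the Stokes number."""
    return 2.0 * DV * stokes / (1.0 + stokes ** 2)

# Tightly coupled dust (St << 1) and decoupled boulders (St >> 1) both drift
# slowly; the peak sits at St = 1, i.e. at the intermediate, roughly
# meter-scale sizes discussed above.
for exponent in range(-4, 5):
    st = 10.0 ** exponent
    print(f"St = 1e{exponent:+d}:  v_drift = {drift_speed(st):8.3f} m/s")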

The author expands the study to calculate drift speeds at different orbital distances. From the derived velocity distribution, he finally calculates typical lifetimes for particles of different radii, which are shown in Figure 3 (Figure 6 in the original paper). The fact that drift speeds are highest for meter-sized bodies marks a crucial limit for planet formation: the so-called meter-size barrier. Meter-sized particles on orbits smaller than 7 AU fall into the Sun after less than 1000 years. This means that planets either have to grow through the meter-size regime extremely quickly at small orbital distances, or they have to grow further out and migrate inwards at later times. Moreover, relative speeds of up to 100 m/s make it very challenging to grow beyond meter-sized objects in the first place, since collisions at such speeds tend to destroy the bodies through fragmentation or erosion.
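As a crude order-of-magnitude check (ours, not the paper's): a body that drifted inward from 1 AU at the peak speed the whole way would survive only

$$
t_{\rm drift} \sim \frac{r}{v_{\rm max}} \approx \frac{1.5\times 10^{11}\ \mathrm{m}}{100\ \mathrm{m\,s^{-1}}} \approx 1.5\times 10^{9}\ \mathrm{s} \approx 50\ \mathrm{yr},
$$

comfortably below the 1000-year figure above; the lifetimes in Figure 3 come from the full calculation, which follows how the drift speed changes as the particle moves inward.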

Now, if you hear anybody talking about the meter-size barrier, you know what’s going on. Obviously, the model is simplified in many respects: for instance, it assumes constant particle sizes and ignores particle growth, disk asymmetries and internal disk turbulence, and it assumes simple power laws for temperature and pressure. Nevertheless, even 38 years after the publication of the study, there is still no commonly accepted solution for circumventing the barrier – though there are many different suggestions. Stay tuned if you are interested in planet formation: you can read more about possible solutions to the problem in future Astrobites.


Subscriptions (feed of everything)


Updated using Planet on 26 April 2015, 05:48 AM