Francis’s news feed

This page combines various news websites and diaries which I like to read.

May 25, 2015

Real-time sunset on Mars by The Planetary Society

Pause your life for six minutes and watch the Sun set....on Mars. Thank you, Glen Nagle, for this awe-inspiring simulation based on Curiosity's sol 956 sunset images.


May 23, 2015

Cosmic Reionization of Hydrogen and Helium by Astrobites

In a time long long ago…

The story we’re hearing today requires us to go back to the beginning of the Universe and briefly explore its rich yet mysterious history. The Big Bang marks the creation of the Universe 13.8 billion years ago. Seconds after the Big Bang, fundamental particles came into existence. They smashed together to form protons and neutrons, which collided to form the nuclei of hydrogen, helium, and lithium. Electrons, at this time, were whizzing by at velocities too high for them to be captured by the surrounding atomic nuclei. The Universe was ionized and there were no stable atoms.

The Universe coming of age: Recombination and Reionization

Some 300,000 years later, the Universe had cooled down a little. The electrons weren’t moving as fast as before and could be captured by atomic nuclei to form neutral atoms. This ushered in the era of recombination and propelled the Universe toward a neutral state. Structure formation happened next; some of the first structures to form are thought to have been quasars (actively accreting supermassive black holes), massive galaxies, and the first generation of stars (population III stars). The intense radiation from incipient quasars and stars started to ionize the neutral hydrogen in their surroundings, marking the second milestone of the Universe, known as the epoch of reionization (EoR). Recent cosmological studies suggest that the reionization epoch began no later than redshift (z) ~ 10.6, corresponding to ~350 Myr after the Big Bang.

To probe when reionization ended, we can look at the spectra of high-redshift quasars and compare them with those of low-redshift quasars. Figure 1 shows this comparison. The spectrum of a quasar at z ~ 6 shows almost zero flux at wavelengths shorter than the quasar’s redshifted Lyman-alpha line. This feature is known as the Gunn-Peterson trough and is caused by the absorption of the quasar light by neutral hydrogen as it travels through space. Low-redshift quasars do not show this feature because the hydrogen along the path of the quasar light is already ionized: the light does not get absorbed and can travel unobstructed to our view. The difference between the spectra of low- and high-redshift quasars suggests that the Universe approached the end of reionization around z ~ 6, corresponding to ~1 Gyr after the Big Bang. (This astrobite provides a good review of reionization and its relation to quasar spectra.)
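As a quick back-of-the-envelope check (not part of the astrobite itself), the redshift-to-age and redshifted-wavelength numbers above can be reproduced with a standard cosmology; the sketch below uses astropy's built-in Planck15 parameters, which may differ slightly from whatever the quoted papers assumed.

```python
# Back-of-the-envelope check of the redshift/age figures quoted above.
# Assumes astropy is installed; Planck15 is one choice of cosmology, not
# necessarily the one used in the papers being discussed.
from astropy.cosmology import Planck15

LYA_REST_NM = 121.6  # rest-frame Lyman-alpha wavelength in nanometres

for z in (6.0, 7.1):
    age = Planck15.age(z)                 # cosmic time at redshift z
    lya_obs = LYA_REST_NM * (1.0 + z)     # observed Lyman-alpha wavelength
    print(f"z = {z:4.1f}: age ~ {age.to('Myr'):.0f}, "
          f"Lyman-alpha observed at ~{lya_obs:.0f} nm")

# z ~ 6 comes out at roughly 0.9 Gyr after the Big Bang, and z ~ 7.1 puts
# Lyman-alpha near 1 micron, consistent with the spectra shown in Figure 1.
```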


Fig 1 – The top panel is a synthetic quasar spectrum at z = 0, compared with the bottom panel showing the spectrum of the highest-redshift quasar currently known, ULAS J112001.48+064124.3 (hereafter ULAS J1120+0641), at z ~ 7.1. While the Lyman-alpha line of the top spectrum is located at its rest-frame wavelength of 121.6 nm (1216 Å), it is strongly redshifted in the spectrum of ULAS J1120+0641 (note the scale of the wavelengths). Compared to the low-redshift spectrum, there is a rapid drop in flux before the Lyman-alpha line for ULAS J1120+0641, signifying the Gunn-Peterson trough. [Top figure from P. J. Francis et al. 1991 and bottom figure from Mortlock et al. 2011]

 

Problems with Reionization, and a Mini Solution

The topic of today’s paper concerns possible ionizing sources during the epoch of reionization, which also happens to be one of the actively researched questions in astronomy. Quasars and stars in galaxies are the most probable ionizing sources, since they are emitters of the Universe’s most intense radiation (see this astrobite for how galaxies might ionize the early Universe). This intense radiation falls in the UV and X-ray regimes and can ionize neutral hydrogen (and potentially also neutral helium, which requires twice as much ionizing energy). But there are problems with this picture.

First of all, the ionizing radiation from high-redshift galaxies is found to be insufficient to maintain the Universe’s immense bath of hydrogen in an ionized state. To make up for this, the fraction of ionizing photons that escape the galaxies (and contribute to reionization) — known as the escape fraction — has to be higher than what we see observationally. Second of all, we believe that the contribution of quasars to the ionizing radiation becomes less important at higher and higher redshifts and is negligible at z >~ 6. So, we have a conundrum here. If we can’t solve the problem of reionization with quasars and galaxies, we need other ionizing sources. The paper today investigates one particular ionizing source: mini-quasars.

What are mini-quasars? Before that, what do I mean when I say quasars? Quasars in the normal sense of the word usually refer to the central accreting engines of supermassive black holes (~10^9 Msun), where powerful radiation escapes in the form of a jet. A mini-quasar is the dwarf version of a quasar. More quantitatively, it is the central engine of an intermediate-mass black hole (IMBH) with a mass of ~10^2 – 10^5 Msun. Previous studies hinted at the role of mini-quasars in the reionization of hydrogen; the authors of this paper went the extra mile and studied the combined impact of mini-quasars and stars not only on the reionization of hydrogen, but also on the reionization of helium. Looking into the reionization of helium allows us to investigate the properties of mini-quasars. Much like solving a set of simultaneous equations, getting the correct answer to the problem of hydrogen reionization requires that we also simultaneously constrain the reionization of helium.

The authors calculated the number of ionizing photons from mini-quasars and stars analytically. They considered only the most optimistic case for mini-quasars, where all ionizing photons contribute to reionization, i.e. the escape fraction f_esc,BH = 1. Since the escape fraction of ionizing photons from stars is still poorly constrained, three stellar escape fractions f_esc are considered. Figure 2 shows the relative contributions of mini-quasars and stars in churning out hydrogen-ionizing photons as a function of redshift for different escape fractions from stars. As long as f_esc is small enough, mini-quasars are able to produce more hydrogen-ionizing photons than stars.
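To make that last point concrete, here is a purely illustrative sketch of how the ratio of escaping ionizing photons scales with the stellar escape fraction; the photon production rates are made-up placeholders, not values from the paper.

```python
# Illustrative only: how the escaping-photon ratio scales with the stellar
# escape fraction. The production rates below are arbitrary placeholders,
# NOT numbers from the paper.
N_DOT_BH = 1.0e50    # ionizing photons/s from mini-quasars (hypothetical)
N_DOT_STAR = 1.0e51  # ionizing photons/s produced inside galaxies (hypothetical)
F_ESC_BH = 1.0       # the paper's most optimistic assumption

for f_esc_star in (0.05, 0.1, 0.2, 0.5):
    ratio = (F_ESC_BH * N_DOT_BH) / (f_esc_star * N_DOT_STAR)
    print(f"f_esc(stars) = {f_esc_star:4.2f} -> mini-quasar/star escaping photon ratio = {ratio:.1f}")

# Halving the stellar escape fraction doubles the relative importance of the
# mini-quasars, which is the qualitative trend shown in Figure 2.
```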


Fig 2 – Ratio of the number of ionizing photons produced by mini-quasars relative to stars (y-axis) as a function of redshift (x-axis). Three escape fractions of ionizing photons f_esc from stars are considered. [Figure 2 of the paper]

Figure 3 shows the contributions of mini-quasars, and of mini-quasars plus (normal) quasars, toward the reionization of hydrogen and helium. Mini-quasars alone are found to contribute non-negligibly (~20%) toward hydrogen reionization at z ~ 6, while the contribution from quasars starts to become more important at low redshifts. The combined contribution from mini-quasars and quasars is consistent with observations of when helium reionization ended. Figure 4 shows the combined contribution of mini-quasars and stars to hydrogen and helium reionization. The escape fraction of ionizing photons from stars significantly affects hydrogen and helium reionization, i.e. it influences whether hydrogen and helium reionization end earlier or later than current theory predicts.


Fig 3 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The different colored lines signify the contributions of mini-quasars (IMBH) and quasars (SMBH) to hydrogen and helium reionizations. [Figure 3 of the paper]


Fig 4 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The two panels refer to the different assumptions on the mini-quasar spectrum, where the plot on the bottom is the more favorable of the two. The different lines refer to the different escape fractions of ionizing photons from stars that contribute to hydrogen and helium reionizations. [Figure 4 of the paper]

The authors note a couple of caveats in their paper. Although they demonstrate that the contribution from mini-quasars is not negligible, this holds only for the most optimistic case, where all photons from the mini-quasars contribute to reionization. The authors also did not address the important issue of feedback from accretion onto IMBHs, which regulates black hole growth and consequently determines how common mini-quasars are. The escape fractions from stars also need to be better constrained in order to place a tighter limit on the joint contribution of mini-quasars and stars to reionization. Improved measurements of helium reionization would also help in constraining the properties of mini-quasars. Phew… sounds like we still have a lot of work to do. This paper presents some interesting results, but we are definitely still treading on muddy ground, and the business of cosmic reionization is no less tricky than we hoped.

 


Tons of fun with the latest Ceres image releases from Dawn by The Planetary Society

Fantastic new images of Ceres continue to spill out of the Dawn mission, and armchair scientists all over the world are zooming into them, exploring them, and trying to solve the puzzles that they contain.


May 22, 2015

In pursuit of a good temperature measurement by Goatchurch

The question is: is it possible to measure the temperature of the air in a thermal from a hang-glider?

A really good fast responsive thermometer that isn’t degraded by noise is probably going to be useful.

I have been entertaining the unconventional notion that one ought to observe the character of the noise before attempting to filter it out — particularly if it appears that all the errors are on one side, due to voltage drops as other sensor circuits take readings on their own independent schedules. (No, I do not believe there is any way to synchronize every one of them.)

[image: adccircuit]

So, as I did with the barometer, I created an electrically isolated circuit running off its own battery and Trinket ATTiny that communicates its data via an optocoupler in 6-bit triples, received on a timed interrupt pin. Although I could reuse a lot of code, it still took ages to debug and get the circuitry right. The basics are the analog TMP36 sensor and a 16-bit analog-to-digital converter, which in theory gives a resolution of about 0.003 degC.
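For reference, that ~0.003 degC figure follows from the TMP36's 10 mV/degC scale factor and the step size of a 16-bit ADC. The sketch below assumes an ADS1115-style converter at its ±1.024 V gain setting; the actual part and gain setting aren't stated above, so treat the numbers as illustrative.

```python
# Rough resolution estimate for a TMP36 read through a 16-bit ADC.
# Assumes an ADS1115-style converter at the +/-1.024 V gain setting;
# the actual part/gain in this build isn't stated, so this is illustrative.
TMP36_MV_PER_C = 10.0          # TMP36 scale factor: 10 mV per degC (750 mV at 25 degC)
FULL_SCALE_V = 1.024           # one-sided full-scale input range (volts)
ADC_BITS = 16                  # signed 16-bit result spanning +/-FULL_SCALE_V

lsb_mv = (2 * FULL_SCALE_V) / (2 ** ADC_BITS) * 1000.0   # millivolts per count
resolution_c = lsb_mv / TMP36_MV_PER_C                   # degrees C per count

print(f"LSB = {lsb_mv * 1000:.1f} uV  ->  {resolution_c * 1000:.1f} mdegC per count")
# ~31 uV per count, i.e. roughly 0.003 degC, matching the figure quoted above.
```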

Also, nobody else believes that running this system off its own separate power supply is the right way to do it, because “a voltage regulator will make everything smooth”.

Not so, according to the MIC5255 data sheet, which is the regulator type chosen for the Trinket:

[image: adcreg]
Clearly, when there is a power surge into a device, the only way this information can be communicated to the regulator is via a change in the voltage (which the regulator quickly compensates for once it begins delivering more power). Accordingly, that is the voltage variation that’s going to screw up your analog temperature voltage measurement if it occurs at the wrong time. It doesn’t matter if you put a capacitor there. The voltage still has to drop in order for the demand to be communicated. If there were a separate wire going to the voltage regulator from each device informing it when to deliver more power, then perhaps it would be logically possible to keep the power line perfectly stable. But there isn’t such a wire, so it isn’t.
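To put rough numbers on that (purely illustrative; the current, duration and capacitance below are guesses rather than anything measured on this board), the sag a decoupling capacitor allows during a load transient is dV = I·dt/C, and even a modest transient sags by far more than one 16-bit ADC count:

```python
# Illustrative only: how much a decoupling capacitor lets the rail sag
# during a brief load transient, compared with one 16-bit ADC count.
# The current, duration and capacitance are guesses, not measured values.
I_TRANSIENT_A = 1e-3      # 1 mA extra load while another sensor takes a reading
DT_S = 100e-6             # lasting 100 microseconds
C_F = 10e-6               # 10 uF of local decoupling

delta_v = I_TRANSIENT_A * DT_S / C_F      # dV = I * dt / C
lsb_v = 2.048 / 65536                     # one count of a 16-bit ADC over +/-1.024 V

print(f"rail sag ~ {delta_v * 1e6:.0f} uV vs one ADC count ~ {lsb_v * 1e6:.1f} uV")
# ~10,000 uV of sag against a ~31 uV LSB: the dip is easily visible in the data.
```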

Never fear. The bodging will commence once all other viable and ugly options have been exhausted.

Having isolated the circuit, I was able to produce a temperature graph like so:
[image: adc1]
The three horizontal lines are degrees C, and the sample rate is approximately 10 readings per second.

This is what happened when I breathed on the sensor: you can see the exponential rise in temperature, followed by the exponential decay back to room temperature.

Unfortunately, that down-spike every 7 or so readings offends me. What’s causing it?

The 16-bit ADC allows for all kinds of configurations.

While I’ve programmed the main Trinket microcontroller to take a reading about every 100 milliseconds, the ADC has been programmed to take readings continuously at a rate of 128 samples per second. Could it be that there’s some sort of beat between these two frequencies, wherein occasionally the read-sample command, which draws some power, is issued at exactly the moment when a sample is taken?
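One way to sanity-check the beat hypothesis (a sketch, not the actual firmware; the 100 ms read interval is only nominal) is to step through the read times and see where each one falls within the ADC's 1/128 s conversion window:

```python
# Quick check of the suspected beat between the ~10 Hz read loop and the
# ADC's 128 samples-per-second conversion clock. Timings are nominal, not
# measured from the real device.
READ_PERIOD_S = 0.100            # main loop reads roughly every 100 ms
CONVERSION_PERIOD_S = 1.0 / 128  # continuous conversions at 128 SPS

for n in range(20):
    t = n * READ_PERIOD_S
    # phase of the read within the current conversion window (0..1)
    phase = (t % CONVERSION_PERIOD_S) / CONVERSION_PERIOD_S
    marker = "  <-- read lands near a conversion edge" if phase < 0.1 or phase > 0.9 else ""
    print(f"read {n:2d}: phase = {phase:.2f}{marker}")

# 0.100 s is exactly 12.8 conversion periods, so the phase advances by 0.8
# each read and nominally repeats every 5 reads; small clock errors turn
# that into a slow drift, i.e. a spike every handful of readings.
```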

My calculations were inconclusive. But I was able to change the waveform and the consistency of the errors by varying the frequencies.

Here we get a kind of 6-reading oscillation:
[image: adc2]

I tried making the readings as fast as possible to see if there was a shape to the distortion, and got one with alternating highs and lows.
[image: adc3]

This one has an interesting saw-tooth pattern. Someone with some deep knowledge could probably come up with an immediate diagnosis from this. (No, I don’t think it’s radio frequency interference!)
[image: adc6]

Here’s another one where it seems to enter and exit a phase lock.
[image: adc4]

I tried single-shot readings (non-continuous mode), but that didn’t give any joy.

I also tried setting the ADC frequency very high to take dozens of readings during each Trinket cycle, which could be averaged or maximized to remove the spikes, but that didn’t work either. (No idea why not.)

The only non-nasty-noise case was when I set the ADC frequency down to 8 samples-per-second, but continued to take readings at around 20 times a second, so that most of them were the same. In this case there were no spikes.
[image: adc7]

Setting it up to 16 samples-per-second and it’s back to tight zig-zags (plotted below for contrast).
[image: adc8]
I’ve also shown the barometric readings (in red) when I tested the system with a quick ride in the elevator.

I am satisfied that the barometric readings are giving me random noise on either side of the true value, so it’s at the limit of what can be done.

This temperature issue is closed for now, with a new ugly circuit glued onto the outside of the flight box into which it does not fit. What a mess.

As a side note, the barometric readings are quite jagged in flight, suggesting there’s some alternative influence on it aside from altitude.

[image: adcpal]
Meantime, the barometer is used for the PALvario haptic actuators. I’ve printed out slightly smaller units (in red) as well as a new box that is better able to protect the cable connections when wedged in the harness.

Who knows? It might just not work!


Super starbursts at high redshifts by Astrobites

Title: A higher efficiency of converting gas to stars push galaxies at z~1.6 well above the star forming main sequence
Authors: Silverman et al. (2015)
First author institution: Kavli Institute for the Physics and Mathematics of the Universe, Todai Institutes for Advanced Study, the University of Tokyo, Kashiwa, Japan
Status: Submitted to Astrophysical Journal Letters

In the past couple of years there has been some observational evidence for a bimodal nature of the star formation efficiency (SFE) in galaxies. Whilst most galaxies lie on the typical relationship between mass and star formation rate (the star forming “main sequence”), slowly converting gas into stars, some form stars at a much higher rate. These “starburst” galaxies are much rarer than the typical galaxy, making up only ~2% of the population and yet ~10% of the total star formation. This disparity in the populations has only been studied for local galaxies and therefore more evidence is needed to back up these claims.


Figure 1: Hubble i band (and one IR K band) images of the seven galaxies studied by Silverman et al. (2015). Overlaid are the blue contours showing CO emission and red contours showing IR emission. Note that the centre of CO emission doesn’t always line up with the light seen in the Hubble image. Figure 2 in Silverman et al (2015).

In this recent paper by Silverman et al. (2015), the authors have observed seven high-redshift (i.e. very distant) galaxies at z ~ 1.6 (shown in Figure 1) with ALMA (the Atacama Large Millimeter Array, northern Chile) and IRAM (Institut de Radioastronomie Millimétrique, Spain), measuring the luminosity of the emission lines from the 2-1 and 3-2 rotational transitions of carbon monoxide in each galaxy spectrum. The luminosity of the light from these transitions allows the authors to estimate the molecular hydrogen (H_2) gas mass of each galaxy.
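For readers unfamiliar with that step, the standard route from CO line luminosity to gas mass is a conversion factor, M(H2) = alpha_CO × L'_CO. The sketch below is generic; the alpha_CO value and line luminosity are placeholders, not the numbers adopted by Silverman et al.

```python
# Generic CO-to-H2 conversion, M_H2 = alpha_CO * L'_CO(1-0).
# The numbers below are placeholders for illustration, not values from
# Silverman et al. (2015).
L_CO_PRIME = 1.0e10   # CO(1-0) line luminosity in K km/s pc^2 (hypothetical)
ALPHA_CO = 1.0        # Msun per (K km/s pc^2); a starburst-like choice,
                      # versus ~4.4 often used for Milky-Way-like discs

m_h2 = ALPHA_CO * L_CO_PRIME
print(f"M(H2) ~ {m_h2:.1e} Msun")

# Observing CO(2-1) or CO(3-2), as here, adds one more step: an assumed line
# ratio to convert down to the equivalent CO(1-0) luminosity first.
```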

Observations of each galaxy in the infrared (IR; 24-500 μm) with Herschel (the SPIRE instrument) allow an estimation of the star formation rate (SFR) from the total luminosity integrated across the IR range, L_IR. The CO and IR observations are shown by the blue and red contours respectively, overlaid on Hubble i-band (rest-frame UV) images in Figure 1. Notice how the CO/IR emission doesn’t always coincide with the UV light, suggesting that a lot of the star formation is obscured in some of these galaxies.


Figure 2. The gas depletion timescale (1/SFE) against the SFR in Figure 2 for the 7 high redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z< 0.25 (grey points) with the star forming main sequence shown by the solid black line. Figure 3c in Silverman et al. (2015).

With these measurements of the gas mass and the SFR, the SFE can be calculated, and in turn the gas depletion timescale, which is the reciprocal of the SFE. This is plotted against the SFR in Figure 2 for the 7 high-redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z < 0.25 (grey points), with the star forming main sequence shown by the solid black line. These results show that the efficiency of star formation in these starburst galaxies is highly elevated compared to those residing on the main sequence, but not as high as in starburst galaxies in the local universe.
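The depletion timescale itself is just the gas mass divided by the star formation rate (equivalently 1/SFE). A minimal sketch with invented numbers, not the measured values from the paper:

```python
# Gas depletion timescale t_dep = M_gas / SFR = 1 / SFE.
# The example values are invented for illustration, not taken from the paper.
M_GAS_MSUN = 5.0e10      # molecular gas mass in solar masses (hypothetical)
SFR_MSUN_PER_YR = 300.0  # star formation rate in Msun/yr (hypothetical)

sfe_per_yr = SFR_MSUN_PER_YR / M_GAS_MSUN        # star formation efficiency, 1/yr
t_dep_gyr = (M_GAS_MSUN / SFR_MSUN_PER_YR) / 1e9

print(f"SFE ~ {sfe_per_yr:.1e} per yr, depletion time ~ {t_dep_gyr:.2f} Gyr")
# A shorter depletion time means a higher SFE, which is why the starbursts in
# Figure 2 lie at shorter depletion times than the main-sequence galaxies.
```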

These observations therefore dilute the theory of a bimodal nature of star formation efficiency, and instead support a continuous distribution of SFE as a function of distance from the star forming main sequence. The authors consider the idea that the mechanisms leading to such a continuous distribution of gas depletion timescales could be related to major mergers between galaxies, which lead to rapid gas compression, boosting star formation. This is also supported by the images in Figure 1, which show multiple clumps of UV-emitting regions, as seen by the Hubble Space Telescope.

To really put some weight behind this theory, though, the authors conclude, like most astrophysical studies, that they need a much larger sample of starburst galaxies at these high redshifts (z ~ 1.6) to determine what the heck is going on.


LightSail Update: All Systems Nominal by The Planetary Society

It's been 24 hours since The Planetary Society’s LightSail spacecraft was deposited into space yesterday afternoon. All systems continue to look healthy.


May 21, 2015

The Evil Business Plan of Evil (and misery for all) by Charlie Stross

I've been away for a few days (family stuff) and the travel gave me a lot of time for thought (you can't type on an inter-city train going at full tilt on our crappy lines). As we seem to be moving into Grim Meathook Future territory with the current government trying to make being poor illegal, I decided to Get With The Program, and invent the most evil business master plan I can think of for capitalizing (heh) on the New Misery.

Note that I am too damned old to play startup chicken all over again, and besides I've got books to write. This is just an exercise in trying to figure out how to make as many people as possible miserable and incrementally diminish the amount of happiness in the world while pretending to be a Force For Good and not actually killing anyone directly—and making money hand over fist. It's a thought experiment, in other words, and I'm not going to do it, and if any sick bastard out there tries to go ahead and patent this as a business practice you can cite this blog entry as prior art.

So. Let me describe first the requirements for the Evil Business Plan of Evil, and then the Plan Itself, in all its oppressive horror and glory.

Some aspects of modern life look like necessary evils at first, until you realize that some asshole has managed to (a) make it compulsory, and (b) use it for rent-seeking. The goal of this business is to identify a niche that is already mandatory, and where a supply chain exists (that is: someone provides goods or service, and as many people as possible have to use them), then figure out a way to colonize it as a monopolistic intermediary with rent-raising power and the force of law behind it. Sort of like the Post Office, if the Post Office had gotten into the email business in the 1970s and charged postage on SMTP transactions and had made running a private postal service illegal to protect their monopoly.

Here's a better example: speed cameras.

We all know that driving at excessive speed drastically increases the severity of injuries, damage, and deaths resulting from traffic accidents. We also know that employing cops to run speed traps the old-fashioned way, with painted lines and a stop-watch, is very labour-intensive. Therefore, at first glance the modern GATSO or automated speed camera looks like a really good idea. Sitting beside British roads they're mostly painted bright yellow so you can see them coming, and they're emplaced where there's a particular speed-related accident problem, to deter idiots from behaviour likely to kill or injure other people.

However, the idea has legs. Speed cameras go mobile, and can be camouflaged inside vans. Some UK police forces use these to deter drivers from speeding past school gates, where the speed limit typically drops to 20mph (because the difference in outcome between hitting a child at 20mph and hitting them at 30mph is drastic and life-changing at best: one probably causes bruises and contusions, the other breaks bones and often kills). And some towns have been accused of using speed cameras as "revenue enhancement devices", positioning them not to deter bad behaviour but to maximize the revenue from penalty notices by surprising drivers.

This idea maxed out in the US, where the police force of Waldo in Florida was disbanded after a state investigation into ticketing practices; half the town's revenue was coming from speed violations. (Of course: Florida.) US 301 and Highway 24 pass through the Waldo city limits; the town applied a very low speed limit to a short stretch of these high-speed roads, and cleaned up.

Here's the commercial outcome of trying to reduce road deaths due to speeding: speed limits are pretty much mandatory worldwide. Demand for tools to deter speeders is therefore pretty much global. Selling speed cameras is an example of supplying government demand; selling radar detectors or SatNav maps with updated speed trap locations is similarly a consumer-side way of cleaning up.

And here's a zinger of a second point: within 30 years at most, possibly a lot sooner, this will be a dead business sector. Tumbleweeds and ghost town dead. Self-driving cars will stick to the speed limit because of manufacturer fears over product liability lawsuits, and speed limits may be changed to reflect the reliability of robots over inattentive humans (self-driving cars don't check their Facebook page while changing lanes). These industry sectors come and go.

Can I identify an existing legally mandated requirement with which the public must comply, and leverage it to (a) provide a law enforcement service at one end, (b) a rent-seeking opportunity at the other, and (c) a natural monopoly that I can milk in the middle? And, wearing my Screwtape hat, do so to maximize misery at the same time?

Oh hell yes I can do that ...

The European Commission is a well-intentioned organization when it comes to protecting its citizens and the natural environment. As they say in their environment notes, "Just in terms of household waste alone, each person in Europe is currently producing, on average, half a tonne of such waste [per year]. Only 40% of it is reused or recycled and in some countries more than 80% still goes to landfill." They have, helpfully, decided to promulgate a set of standards for recycling waste, to be implemented by governments throughout the EU: the EU Waste Framework Directive "requires all member states to take the necessary measures to ensure waste is recovered or disposed of without endangering human health or causing harm to the environment." (From the British Government notes on Waste legislation and regulations.)

Great. Like speed limits, recycling is both inarguably sensible and necessarily mandated by law (because it's a tragedy of the commons issue, like speeding). We can work with this!

Here in Edinburgh, we're supposed to separate out our domestic waste into different bins, for separate collection. We have on-street recycling of packaging materials, and, separately, of paper. We have general refuse, and in some areas biomass/garden refuse (not so much in the city centre where I live). Glass recycling ... should be a thing, but they're struggling to separate it out: ditto metals such as cans. (As for WEEE I have no idea what we're supposed to do, which is kind of worrying.) Let's take Edinburgh as a typical case. The city provides refuse collection as one of its services, and this includes sorting and recycling. By pre-sorting their ejecta, citizens are providing a valuable labour input that increases the efficiency of the recycling process and reduces the overheads for the agencies tasked with shifting our shit.

Now, what happens when the mundane reality of household garbage recycling meets the Internet Of Things and Charlie's Evil Business Plan of Evil (and Misery)?

Well, we know that ubiquitous RFID tags are coming to consumer products. They've been coming for years, now, and the applications are endless. More to the point they can be integrated with plastic products and packaging, and printed cheaply enough that they're on course to replace bar codes.

Embedded microcontrollers are also getting dirt cheap; you can buy them in bulk for under US $0.49 each. Cheap enough to embed in recycling bins, perhaps? Along with a photovoltaic cell for power and a short-range radio transceiver for data. I've trampled all over this ground already; the point is, if it's cheap enough to embed in paving stones, it's certainly cheap enough to embed in bins, along with a short-range RFID reader and maybe a biosensor that can tell what sort of DNA is contaminating the items dumped in the bins.

The evil business plan of evil (and misery) posits the existence of smart municipality-provided household recycling bins. There's an inductance device around it (probably a coil) to sense ferrous metals, a DNA sniffer to identify plant or animal biomass and SmartWater tagged items, and an RFID reader to scan any packaging. The bin has a PV powered microcontroller that can talk to a base station in the nearest wifi-enabled street lamp, and thence to the city government's waste department. The householder sorts their waste into the various recycling bins, and when the bins are full they're added to a pickup list for the waste truck on the nearest routing—so that rather than being collected at a set interval, they're only collected when they're full.

But that's not all.

Householders are lazy or otherwise noncompliant and sometimes dump stuff in the wrong bin, just as drivers sometimes disobey the speed limit.

The overt value proposition for the municipality (who we are selling these bins and their support infrastructure to) is that the bins can sense the presence of the wrong kind of waste. This increases management costs by requiring hand-sorting, so the individual homeowner can be surcharged (or fined). More reasonably, households can be charged a high annual waste recycling and sorting fee, and given a discount for pre-sorting everything properly before collection—which they forfeit if they screw up too often.

The covert value proposition ... local town governments are under increasing pressure to cut their operating budgets. But by implementing increasingly elaborate waste-sorting requirements and imposing direct fines on households for non-compliance, they can turn the smart recycling bins into a new revenue enhancement channel, much like the speed cameras in Waldo. Churn the recycling criteria just a little bit and rely on tired and over-engaged citizens to accidentally toss a piece of plastic in the metal bin, or some food waste in the packaging bin: it'll make a fine contribution to your city's revenue!

We can also work the other end of the rent pipeline. Sell householders a deluxe bin with multiple compartments and a sorter in the top: they can put their rubbish in, and the bin itself will sort which section it belongs in. Over a year or three the householder will save themselves the price of the deluxe bin in avoided fines—but we don't care, we're not the municipal waste authority, we're the speed camera/radar detector vendor!

There is a side-effect, of course: fly-tipping. But hey, not our problem. And anyway, it's just a sign that our evil scheme is working.

Meanwhile 90% of our waste mountain comes from the business sector, not consumers, but we don't care about that—businesses do not constitute a captive market as their waste collection is already commercialized and outsourced.

Anyway. The true point of this plan is that it's possible to pervert the internet of things to encourage monopolistic rent-seeking and the petty everyday tyranny of regulations designed not to improve our quality of life but to provide grounds for charging fines for petty infringement. Screwtape would be proud, and our investors will be extremely happy.

What other opportunities for using the IoT to immiserate and oppress the general public for pleasure and profit can you think of?


The Chief Internet of Things Officer by Simon Wardley

Back in 2006, I gave a talk at EuroOSCON on Making the Web of Things, which covered manufacturing methods from 3D printing to devices connected to services over the internet. I had an interest in this field and its consequences for everyday life, and used to describe its effect through a scenario known as "Any Given Tuesday", which gave a comparison of today's life against the future. I was also involved in a number of side projects (you can't learn unless you get your hands dirty), from a paper book with printed electronics (i.e. turning a paper book into an interactive device) to various animatronic experiments.

For Background (general interest) ...

A background on the combination of physical and digital including future languages e.g. Spimescript is provided here. The presentation from EuroFoo is below.



The scenario for Any Given Tuesday is provided here. There is a slidedeck from 2005 for this and at some point I'll post it, however it's not necessary.

For the interactive book (physical + electronics), then the following video from 2008 gives a good enough description.



You'll also find discussion on the future of books along with various reports of mine (from 2002 to 2006) on 3D printing and techniques behind this e.g.



So, back to today ...

These days we call the Web of Things the Internet of Things. We still haven't invented SpimeScript though some are getting close. We're seeing more interaction in physical devices and the continued growth of 3D printing including hybrid forms of physical and electronics. It's all very exciting ... well, some of it is. I'm a bit of an old hand and so parts of this change (especially endless pontifications by consultants / analysts) tend to send me to sleep - it's a bit like the cloud crowd - some good, some "blah, blah, blah".

However, there's something I do want to point out with this change, and why everyone who can should attend the O'Reilly Solid conference.

Most organisations are terrible at coping with change. You're not designed to cope. You don't exist in adaptive structures that deal with evolution. You have to bolt on new things as a new structure and somehow muddle through the mess it creates. You're probably doing this now by adding a Chief Digital Officer. You're probably adding on Agile or Lean, or even worse yo-yoing from one extreme (e.g. six sigma) to another. You might even have done something daft like organising by dual structure in the "hope" that this fixes you (p.s. it won't). Yes, we have extremes, but the key is how to organise to include the transition between the extremes.

So, what's this got to do with Solid and IoT?

Well, unfortunately IoT requires a different set of practices (from design to construction), a different set of techniques and a mix of attitude from "pioneer" to "settler". The underlying components might be quite commodity, but what is being built with them is often a process of discovery and exploration. Though there are common lessons, there's a very different mindset and set of value chain relationships to IoT, which is built from experience. What I'm saying is that Physical + Digital is not the same as Digital.

Now, if you're one of those very lucky organisations that have a strategic CIO then you're ok, they'll adapt and muddle through. If you're not and you've had to bolt on a Chief Digital Officer then you might have a problem. Digital is not the same as IoT, and unfortunately I've met quite a few Chief Digital Officers that are about as un-strategic as the CIOs they were meant to replace. If you've got one of these (and don't be surprised if you do) then you're going to need a Chief Internet of Things Officer (CITO). As Venture Beat says, 'The most important CxO you haven't hired', and they're spot on, until the next change of this type and the next bolt-on.

So, get yourself along to Solid and start scouting. Learn a little about the wonder of the combination of physical & digital and if you're lucky, hire some talent.

Personally, if you want to avoid adding more CxOs then I'd recommend creating an adaptive organisation able to cope with change. But that requires extremely high levels of situational awareness which alone is way beyond most companies. It's also often unnecessary unless you're competing against such adaptive structures (which in the commercial world seems rare). Hence it's usually easier to simply bolt on and deal with some of the conflict that this will create. Cue endless bunfights between CIO vs CDO vs CMO vs CITO and proclamations of death of one over the other.

Of course that means we're going to get endless Chief Internet of Things Officer societies, institutes, awards, proclamations of greatness, the CITO is the new CIO or CDO or whatever along with  blah, blah, blah. That's life.


Merging White Dwarfs with Magnetic Fields by Astrobites

The Problem

White dwarfs, the final evolutionary state of most stars, will sometimes find themselves with another white dwarf nearby. In some of these binaries, gravitational radiation will bring the two white dwarfs closer together. When they get close enough, one of the white dwarfs will start transferring matter to the other before they merge. These mergers are thought to produce a number of interesting phenomena. Rapid mass transfer from one white dwarf to the other could cause a collapse into a neutron star. The two white dwarfs could undergo a nuclear explosion as a Type Ia supernova. Least dramatically, these merging white dwarfs could also form one massive, rapidly rotating white dwarf.

There have been many simulations of merging white dwarfs over the last 35 years, as astronomers try to figure out the conditions that lead to each of these outcomes. However, none of these simulations have included magnetic fields during the merging process, even though it is well known that many white dwarfs have magnetic fields. This is mostly because other astronomers have simply been interested in different properties and results of mergers. Today’s paper simulates the merging of two white dwarfs with magnetic fields to see how these fields change and influence the merger.

The Method

The authors choose to simulate the merger of two fairly typical white dwarfs. They have carbon-oxygen cores and masses of 0.625 and 0.65 solar masses. The magnetic fields are 2 x 10^7 Gauss in the core and 10^3 Gauss at the surface. Recall that the Earth has a magnetic field strength of about 0.5 Gauss. The temperature at the surface of each white dwarf is 5,000,000 K. The authors start the white dwarfs close to each other (about 2 x 10^9 cm apart, with an orbital period of 49.5 seconds) to simulate the merger.
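As a quick plausibility check (not a calculation from the paper), Kepler's third law for the quoted masses and an approximate 2 x 10^9 cm separation gives an orbital period in the right ballpark of the quoted 49.5 seconds:

```python
# Kepler's third law check for the initial binary: P = 2*pi*sqrt(a^3 / (G*M)).
# The separation is only quoted as "about 2e9 cm", so expect rough agreement
# with the 49.5 s period, not an exact match.
import math

G = 6.674e-8                      # gravitational constant in cgs
MSUN = 1.989e33                   # solar mass in grams
m_total = (0.625 + 0.65) * MSUN   # combined white dwarf mass
a = 2.0e9                         # separation in cm (approximate)

period = 2 * math.pi * math.sqrt(a**3 / (G * m_total))
print(f"P ~ {period:.0f} s")      # ~43 s for a = 2.0e9 cm; ~50 s for a ~ 2.2e9 cm
```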

To keep track of what is happening, the authors use a code called AREPO. AREPO works as a moving mesh code – the highest resolution is kept where interesting things are happening. There have been a number of past Astrobites that have covered how AREPO works and some of the applications to planetary disks and galaxy evolution.

The Findings


Figure 1: Result from the simulation showing how the temperature (left) and magnetic field strength (right) change over time (top to bottom). We are looking down on the merger from above.

Figure 1 shows the main result from the paper. The left column is the temperature and the right column in the magnetic field strength at various times during the simulation. By 20 seconds, just a little mass is starting to transfer between the two white dwarfs.  Around 180 seconds, tidal forces finally tear the less massive white dwarf apart. Streams of material are wrapping around the system. These streams form Kelvin-Helmholtz instabilities that amplify the magnetic field. Note how in the second row of Figure 1, the streams with the highest temperatures also correspond to the largest magnetic field strengths. The strength of the magnetic field is changing quickly and increasing during this process. By 250 seconds, many of the streams have merged into a disk around the remaining white dwarf.

By 400 seconds (not shown in the figure), the simulations show a dense core surrounded by a hot envelope. A disk of material surrounds this white dwarf. The magnetic field structure is complex. In the core, the field strength is around 10^10 Gauss, significantly stronger than at the start of the simulation. The field strength is about 10^9 Gauss at the interface of the hot envelope and the disk. The total magnetic energy grows by a factor of 10^9 from the start of the simulation to the end.
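For a sense of scale (a standard cgs formula, not a calculation from the paper), the magnetic energy density is u = B^2/8π, so even the local jump in the core field corresponds to an enormous rise in energy density:

```python
# Magnetic energy density in cgs: u = B^2 / (8*pi), in erg/cm^3.
# Field strengths are the representative values quoted above.
import math

def magnetic_energy_density(b_gauss):
    """Energy density of a magnetic field B (Gauss) in erg/cm^3."""
    return b_gauss ** 2 / (8 * math.pi)

for label, b in [("initial core", 2e7), ("initial surface", 1e3),
                 ("final core", 1e10), ("envelope/disk interface", 1e9)]:
    print(f"{label:>24s}: B = {b:.0e} G -> u ~ {magnetic_energy_density(b):.1e} erg/cm^3")

# Because u scales as B^2, the ~500-fold rise in the core field alone is a
# ~2.5e5 jump in local energy density.
```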

These results indicate that most of the magnetic field growth occurs from the Kelvin-Helmholtz instabilities during the merger. The field strength increases slowly at first, then very rapidly before plateauing out. The majority of the field growth occurs during the tidal disruption phase (between about 100 and 200 seconds in the simulation). Since accretion streams are a common feature of white dwarf mergers, these strong magnetic fields should be created in most white dwarf mergers. As this paper is the first to simulate the merging of two white dwarfs with magnetic fields, future work should continue to refine our understanding of this process and observational implications.

 


LightSail Sends First Data Back to Earth by The Planetary Society

LightSail is sending home telemetry following a Wednesday commute to orbit aboard an Atlas V rocket.


Liftoff! LightSail Sails into Space aboard Atlas V Rocket by The Planetary Society

The first of The Planetary Society’s two LightSail spacecraft is now in space following a late morning launch from Cape Canaveral Air Force Station in Florida.


May 20, 2015

Stealing Hot Gas from Galaxies by Astrobites

Title: Ram Pressure Stripping of Hot Coronal Gas from Group and Cluster Galaxies and the Detectability of Surviving X-ray Coronae
Authors: Rukmani Vijayaraghavan & Paul M. Ricker
First Author’s institution: Dept. of Astronomy, University of Illinois at Urbana-Champaign
Status: Accepted to MNRAS

Making crass generalizations, galaxies are really just star forming machines that come in a variety of shapes and sizes. They form in dark matter halos, and grow over time as they accrete gas and merge with other galaxies. Left to their own devices, they would slowly turn this gas into stars, operating in a creative balance between the outflow of gas from galaxies, through supernova driven winds, for example, and the ongoing inflow of gas from the galaxy’s surroundings. Eventually, galaxies will convert nearly all of their gas into stars, and “die” out. However, galaxies often do not evolve in isolation. This is the case for galaxies in groups and clusters of galaxies, and the effects of those environments prove detrimental to our star forming machines.

In this simple picture, cold gas contained within the disks of galaxies acts directly as star formation fuel. However, galaxies are also surrounded by hot, gaseous coronae (think millions of degrees) that act as reservoirs of gas that may eventually cool, fall into the galaxy, and form stars. Removing the cold gas will immediately stop star formation, while removing the hot coronae will quietly shut off star formation by cutting off the supply of more gas. Rather dramatically, this removal of hot gas and delayed shut-off is referred to as “strangulation“. In galaxy groups and clusters, both cold and hot gas can be violently removed from galaxies as they travel through the hot gas interspersed throughout the group/cluster (called the intracluster medium, or ICM). This violent removal of gas is known as ram pressure stripping (RPS), which again can lead to strangulation. However, some galaxies survive this process, or are only partially affected. The authors of today’s astrobite focus on the strangulation process: how the hot coronae of galaxies are removed, and how they may even survive as galaxies move through clusters.
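As a rough guide to the physics (a standard order-of-magnitude estimate, not the authors' actual setup), ram pressure scales as P_ram ≈ ρ_ICM v², and gas is stripped wherever that exceeds the pressure binding the corona to its galaxy. A sketch with placeholder numbers:

```python
# Order-of-magnitude ram pressure estimate, P_ram ~ rho_ICM * v^2 (cgs).
# The ICM density and galaxy velocity below are typical placeholder values,
# not taken from the simulations in the paper.
M_P = 1.673e-24          # proton mass in grams

n_icm = 1e-4             # ICM particle density in cm^-3 (hypothetical)
v = 1000e5               # galaxy velocity through the ICM: 1000 km/s in cm/s

p_ram = n_icm * M_P * v ** 2
print(f"P_ram ~ {p_ram:.1e} erg/cm^3")   # ~1.7e-12 erg/cm^3 for these numbers

# Where P_ram exceeds the pressure binding a galaxy's hot corona, that gas is
# stripped; denser cluster gas and faster orbits strip gas faster, which is
# why the cluster galaxies lose their coronae sooner than the group galaxies.
```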

Simulating Galactic Strangulation

The authors construct two hydrodynamical simulations, one each for a galaxy group and galaxy cluster containing 26 and 152 galaxies respectively. Each galaxy in their simulation has a mass greater than 10^9 solar masses, and the group/cluster has a total mass of 3.2×10^13 / 1.2×10^14 solar masses. They make some idealizations in order to more cleanly isolate the effects of the group/cluster environment on their galaxies. Their galaxies all exist in spherical dark matter halos, with the dark matter implemented using live particles. The authors only include the hot gaseous halos, also spherical, for their galaxies, and leave out the cold gas, as their focus is the strangulation process.


Figure 1: Images of the galaxy group (top) and galaxy cluster (bottom) as viewed through gas temperature projections, for the initial conditions (left) and after about 2 Gyr of evolution (right). The galaxies stand out as the denser, blue dots surrounded by nearly spherical gaseous halos. On the right, there appear to be many fewer galaxies in both cases, as their hot gas has been removed, and the hot gas remaining in galaxies is severely disrupted. (Source: Figures 3 and 5 of Vijayaraghavan and Ricker 2015)

Figure 1 shows temperature projections of the initial conditions, and the group/cluster after 2.0 Gyr of evolution. The blue circles are galaxies, surrounded by warmer gaseous halos; the images are centered on the centers of the even hotter gas contained within the galaxy group/cluster. As shown, there is significant hot corona gas loss over time from the galaxies, yet some are still able to hold onto their gas. This is shown even more quantitatively in Figure 2, giving the averaged mass profiles of galaxies (current mass over initial mass as a function of radius) in the group (left) and the cluster (right) over time. Their results show that about 90% of the gas bound to galaxies is removed after 2.4 Gyr, and that the process is generally slower in groups than clusters.


Figure 2: Averaged gas mass profiles as a function of radius for the galaxies in the group (left) and the cluster (right) over time. The profiles are normalized by the initial gas profile. The solid lines show all galaxies, while the dashed lines show galaxies with initial masses greater than 10^11 solar masses, and the dash-dotted lines those with initial masses less than 10^11 solar masses. (Source: Figure 6 of Vijayaraghavan and Ricker 2015)

Bridging Simulations and Observations

Aside from studying the stripping and gas loss processes in detail, the authors seek to make observational predictions for how much (if any) of the hot gas can be observed in galaxies in groups and clusters. To do this, the authors take their simulations and make synthetic X-ray observations of their group and cluster galaxies. Figure 3 gives an example of one of these maps for the galaxy group at about 1 Gyr. Shown is the temperature projection on the left (similar to Figure 1), next to mock X-ray observations with 40 ks and 400 ks exposures.


Figure 3: Temperature projection (left) and mock X-ray emission maps (center and right) for the galaxy group at about 1 Gyr. The maps are shown for 40 kilosecond (ks) and 400 ks exposure times. As shown, some of the galaxy corona gas is visible for long enough X-ray exposures. The red central dot in the X-ray images is the X-ray emission from the much hotter gas belonging to the galaxy group itself. (Source: Figure 10 of Vijayaraghavan and Ricker 2015)

The authors find that the tails of hot gas coming off of stripped galaxies, and the remaining hot gas bound to the galaxies, can be observed for 1 – 2 Gyr after they first start being stripped using a 40 kilosecond (ks) exposure with the Chandra X-ray telescope. This is a fairly long exposure, however, and the authors suggest that the hot gas can be detected by making multiple, shorter observations of many galaxies in clusters and stacking the resulting images together. As suggested by the rate of gas stripping between galaxy groups and clusters, they suggest a successful detection is more likely in galaxy groups, where stripping is a slower process.

Towards Understanding Strangulation

This work dives into how a galaxy’s environment affects its evolution. Galaxies’ movement through galaxy groups and galaxy clusters throws a spanner into the works of these star forming machines. This work presents some exciting suggestions that, by combining current and upcoming X-ray observations, we may be able to directly detect this strangulation in action.


Two Months from Pluto! by The Planetary Society

Two months. Eight and a half weeks. 58 days. It's a concept almost too difficult to grasp: we are on Pluto's doorstep.


Rover eyes on rock layers on Mars by The Planetary Society

Digging in to mission image archives yields similar images of layered Martian rocks from very different places.


[Updated] House NASA Funding Bill Proposes a Fantastic Budget for Planetary Science by The Planetary Society

The House Appropriations Committee released their vision for NASA's 2016 budget this week, which includes significant increases for the SLS and Planetary Science, but cuts Commercial Crew and Earth Science funds.


In Pictures: LightSail’s Rocket Rolls to the Launch Pad by The Planetary Society

Pictures of The Planetary Society’s LightSail spacecraft rollout to the launch pad at Cape Canaveral Air Force Station’s Launch Complex 41.


Timeline: LightSail's First Day in Space by The Planetary Society

It's the day before launch at Cape Canaveral. Here's a timeline of events for LightSail's first day in space.


May 19, 2015

The probable problem with Watson by Simon Wardley

I like IBM Watson. It seems likely to be a roaring success. But there is a future problem in my opinion.

The issue with such 'reasoning' and interpretive capabilities (unlike machine learning through patterns) is that they are relatively novel and we're still in a mode of discovery. During this time Watson will grow, it will become more refined and IBM will build a successful product and rental business.

However, around 2030 (see figure 1) such 'intelligent' agents will be on the cusp of industrialisation (the 'war' phase of economic change, the shift from product to commodity). New entrants not encumbered by an existing business will launch a range of highly industrialised services (most likely Amazon - and yes, I see no reason why they won't be around by then, bigger than ever). In the following 10-15 years the previous players, all encumbered by existing and successful businesses, will be taken out of the game unless they adapt.

Figure 1 - The Wars


This pattern (known as the peace, war and wonder cycle) occurs relentlessly throughout history. Those who develop and build the field rarely reap the rewards of industrialisation. IBM is no different. As has happened to it in the past, IBM tends to develop the field, grow a successful business and then is forced to reinvent itself.

The timeframe today for this process (which is affected by commoditisation of means of communication) is currently around 20-30 years from genesis to point of industrialisation (the onset of war) and 10-15 years for the previous industry to be taken down.

To counter this pattern you need to have enough situational awareness that you can industrialise yourself before your opponents do. There are two forms of disruption, and this type can be anticipated through awareness and weak signals. However, such awareness, which is created through the use of maps and weak signal detection, is rare and normally confined to Government circles.

Mapping of economic landscapes is itself on a journey and is only ten years down the path. It'll be another 10-20 years before maps get to the point of industrialisation in which case businesses built around sharing maps, use of industry maps and changes to the whole field of strategic play may well occur. Those too will take a further 10-15 years to develop.

Unfortunately, at the point when IBM needs to industrialise Watson, it will have many of the 16 different forms of inertia to change (including a pre-existing business), but in all likelihood it'll lack the situational awareness needed to know when to industrialise. The odds don't appear to be in its favour here.

If you want a timeline then .... well, there isn't one. The future depends upon actors' actions; it's an uncertainty barrier we can't see through, and you can't pre-determine action but instead only look at possible scenarios.

In one scenario ...

IBM by 2025 should be a very different beast due to the changes in cloud. It'll be a shadow of its former might in infrastructure but still significant in terms of specific application services (e.g. healthcare) and the platform arena of cloud (e.g. Bluemix Cloud Foundry). The idea that IBM built hardware will become a fading memory. The building of an ecosystem around Watson will help it to grow and evolve and you should have no doubt that IBM will create a successful product and rental business in this space. This is why I've said that you shouldn't count them out. However the problem with IBM is its constant reinvention. It's a great company but it fails to learn the game and tends to get industrialised out of a space. It might like to view this as exiting a commoditising space but those commoditising spaces (e.g. cloud) can and do provide huge ecosystem advantages.  

The industrialisation of the intelligent agent space will kick in (around 2030), started by a player like Amazon. In the first instance IBM will dismiss the new approaches. At 8-10 years in (2040) the new players' services will still be less than 3% of the market. By 2045, they will be 50%+. The game will be over.

Watson will have created new glory but it will now be in decline as industrialised forms take over. Watson is also likely to have sapped strength from the platform effort whilst building inertia to its own industrialisation. IBM will be facing a new crisis of reinvention against much larger and much deadlier foes. It won't be the case of the giant IBM being taken on and losing to the minnow of Amazon ($125 Bn vs $15 Bn, as it was in 2006) but instead the minnow of IBM losing a core revenue stream to the giant of Amazon.

Of course, the future depends upon actors' actions and it doesn't have to be like this.

In another scenario, IBM could play this game to its favour, but to do so it would need to capture those industrialised spaces and play fragmentation games against competitors. It will need to exploit constraints, and the platform play has to be at the heart of this. To achieve this, it should already be working on how to industrialise Watson into a commodity service. This is more than just providing a rental service; it means looking at how to use Watson to create that industrialised future. It should be building upon competitors' services and using this to cut them out of data flows and diminish their ability to sense the future. This is a tough road to travel because success with products and rental services will constantly argue against making such a move.

So which way will it go?

Alas, the incentive just isn't there for long term play with most commercial companies. Many of us will retire and some of us will have died through old age before this game plays out. However, if we could travel in time then I'd suspect we will see a new group of IBM execs facing a future of reinventing IBM as another pillar gets taken away. Of course, every now and then, you get someone who can play the game to its advantage - a Jeff Bezos, a Tim Cook etc. Every now and then, even companies get lucky.

However, the one thing I've learned after 25 years in business is that companies are singularly bad at long term gameplay and constantly get taken out by changes that can be anticipated and defended against. The only thing I know of which is worse than long term company gameplay is my ability to bet correctly on individual actors' actions. 

That said, if I was a betting man (and no, I'm not) then I'd plump for a cup of tea on 2040-2045 as when IBM finally expires. For me Watson is not only its future but its doom. Of course, I could be wrong ... I frequently am.


Why Are We Here? by Albert Wenger

We spend a lot of time in tech inventing and building new things. Some people are perfectly happy doing so without needing a deeper reason — some simply want success, others wealth, and many are excited about the potential to make the world a better place. Still I am struck by an undercurrent of dissatisfaction even among people who have accomplished a lot. I attribute that to the lack of a deeper purpose. Few people in tech seem to accept an easy religious answer to the question of why we are here. I have struggled with that myself but feel comfortable with what I believe now.

If you have followed my blog for a while you know that I have written about personal change in the past. Part of that exploration for me has been reading key works in Hinduism and Buddhism. One of the foundational precepts of Buddhism is that everything is ephemeral. Human pain comes from our failure to accept this impermanence. We become attached to people or things and when they inevitably disappear we suffer. I have found this to be a profound insight with powerful consequences for everyday life. Letting go of attachments is the way to overcome most if not all of our fears of the future and regrets of the past.

Yet I also believe that there is an important exception: human knowledge. I have previously argued that knowledge is the information that we as humans choose to replicate over time. It thus includes historical accounts, scientific knowledge and cultural artifacts (including literature, music, art, etc). Knowledge is unique to humans at least here on our planet. Other species don’t have externalized information that outlives them individually (I say externalized to contrast knowledge with DNA).

Human knowledge in principle has the potential to be eternal. It could exist as long as the universe does (and as far as I know we aren't sure yet whether that will come to an end). Knowledge could even outlive humanity and still be maintained and developed further by some artificial or alien intelligence that succeeds us, although I would prefer the contributors to include future generations of humans.

For me the very existence and possibility of human knowledge provides the answer to the question of why we are here and what we should try to accomplish in life. We should endeavor to contribute to knowledge. Given my definition this can mean a great many things, including teaching and making music and taking care of others. Anything that either adds to or reproduces knowledge is, so far, a uniquely human activity and why we are here (“adding” includes questioning or even invalidating existing knowledge).

Once our basic needs are taken care of I believe we should devote much of our time to knowledge. We can still do things like create new products or start new companies (or invest in them). But we shouldn’t be mindless consumers of stuff or information. And we should focus on products or services that either contribute directly to knowledge or help others do so including by helping take care of basic needs (food, shelter, clothing, health, transportation, connectivity). This is also why I support the idea of a universal basic income.

Now at first blush the focus on knowledge sounds value free. What if you are inventing the nuclear bomb or worse? I have written about how values are important to guide what systems we build. I am convinced that many (and maybe all) of the values I believe in can be derived from the foundational value of knowledge, including, for example, conservation of the environment. I will write more on that in future posts.

This view of the meaning of life is what works for me personally and I am sharing it because it might work for others also. In doing so I am being consistent with the very belief I am describing. If these ideas have merit they will get replicated by others and carried forward over time and have a chance to become part of knowledge itself.

It is also likely that others have thought of this approach to the meaning of life before me. Knowledge is far vaster than what any one person can possibly know. And so as always when writing, I look forward to comments that point me to related work and people.


The Next Transit Hunters by Astrobites

  • Title: The Next Great Exoplanet Hunt
  • Authors: Kevin Heng and Joshua Winn
  • First Author’s Institution: University of Bern
  • Published in American Scientist

How do you answer when the theorist frowns and says: “Exoplanetary science isn’t fundamental, it’s just applied physics—no offense, of course”. Your reply might be: “None taken Dr. X, yes, we are not expecting to gain insights into grand unified theories by hunting for exoplanets, but the stakes are nevertheless high. We are on the verge of the Copernican revolution all over again: we could find signs of life out there, however small, removing humanity from the center of the biological Universe. Are you with us?”

Indeed, the stakes are high, and the hunt is on. For the last three decades there has been a rush of activity to find exoplanets. We have found many. The most successful detection method to date is the transit method, and Kepler, the most productive transit mission so far, has confirmed the existence of over 1000 exoplanets, two-thirds of all currently known planets.

According to Heng & Winn, the authors of today’s paper, the long-term strategy of exoplanet hunting is clear. First you find them. Second, you characterize them. Third, you search for biomarkers in their atmospheres. The first two steps both have a number of maturing methods, but we are still finding our footing with the third step. In this paper Heng & Winn largely focus on transiting planets—planets whose atmospheres we can study for biomarkers. Why? Read on.

Space-based transits versus ground-based?

Initially, exoplanet transits were studied by ground-based observatories. They have problems: the Sun is periodically in the way (usually during the day), and Earth's atmosphere interferes with the observations. These problems can be circumvented by launching telescopes into space (see the figure below). In space, the precision is only limited by fundamental photon-counting noise; we can't ask for anything better. Granted, launching things into space is expensive, but it is getting cheaper with help from the private sector. Even so, ground-based surveys will most likely continue to give you the most bang for your buck, and will continue to play a strong complementary role to space-based transit missions in the future.


Figure 1: Precision in space is better. The upper panel shows a planetary transit observed from the ground with a 1.2m diameter telescope, while the lower panel shows a transit observed with Kepler (1.0m). The precision in space is higher: we don’t have to deal with the atmosphere, and our measurements are not interrupted periodically with the Sun rising every day. Figure 1 from the paper.

Characterizing exoplanetary atmospheres

The atmospheres of transiting planets can be studied and analyzed for biomarkers via transit spectroscopy. There are two main ways. First, during a transit, some of the starlight shines through the atmosphere of the planet. We can then look for atmospheric absorption features by contrasting the spectrum observed during transit with the one observed outside of transit. The second way is to study occultations. The planet itself reflects light from its host star, and that reflected light contains information about its atmospheric structure. We can infer how much light is reflected by detecting the drop in brightness as the planet travels behind the star.

However, not all exoplanets are created equal for atmospheric characterization. This characterization is easiest for Hot Jupiters (big Jupiter-size planets that orbit close to their host star), which tend to have puffy atmospheres. Heng & Winn note a stark contrast between the impressive-sounding 1500 confirmed exoplanets and the relatively small number whose atmospheres we can currently characterize in any meaningful way: only about a dozen (take a look at Figure 2). Most of them are hot gas giants, flaming puffy planets significantly larger than the Earth: not habitable. We want to change that, and go for the gold: habitable planets.
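
To get a feel for why puffy, close-in giants are so much easier, here is a minimal back-of-the-envelope sketch (my own, not from the paper): the transit depth is (Rp/Rs)^2, and one scale height H of opaque atmosphere adds roughly 2*Rp*H/Rs^2 to that depth, with H = kT/(mu*mH*g). The numbers below are illustrative round values.

// A minimal sketch, not from Heng & Winn: compare the transit depth and the
// per-scale-height atmospheric signal of a hot Jupiter and an Earth twin
// around a Sun-like star. Standard relations: depth = (Rp/Rs)^2 and
// signal ~ 2*Rp*H/Rs^2, with scale height H = k*T/(mu*mH*g).
// All numbers are illustrative round values.
#include <cstdio>
#include <cmath>

const double kB = 1.380649e-23;   // Boltzmann constant, J/K
const double mH = 1.6726e-27;     // proton mass, kg
const double G  = 6.674e-11;      // gravitational constant, SI

// Per-scale-height transmission signal in parts per million.
double signal_ppm(double Rp, double Rs, double Mp, double T, double mu)
{
  double g = G * Mp / (Rp * Rp);          // planet surface gravity
  double H = kB * T / (mu * mH * g);      // atmospheric scale height
  return 2.0 * Rp * H / (Rs * Rs) * 1e6;  // opaque annulus / stellar disc
}

int main()
{
  const double Rsun = 6.957e8, Rjup = 7.149e7, Rearth = 6.371e6;  // metres
  const double Mjup = 1.898e27, Mearth = 5.972e24;                // kg

  // Hot Jupiter: ~1500 K, hydrogen-dominated atmosphere (mu ~ 2.3).
  printf("hot Jupiter: depth %.0f ppm, signal per scale height %.0f ppm\n",
         pow(Rjup / Rsun, 2) * 1e6, signal_ppm(Rjup, Rsun, Mjup, 1500.0, 2.3));

  // Earth twin: ~290 K, nitrogen/oxygen atmosphere (mu ~ 29).
  printf("Earth twin:  depth %.0f ppm, signal per scale height %.2f ppm\n",
         pow(Rearth / Rsun, 2) * 1e6, signal_ppm(Rearth, Rsun, Mearth, 290.0, 29.0));
  return 0;
}

With these inputs the hot Jupiter comes out around a percent deep with a per-scale-height signal of order tens of ppm, while the Earth twin's atmospheric signal is a fraction of a part per million, which is exactly the gap Figure 2 illustrates.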


Figure 2: Exoplanetary atmospheres are hard to study. The diagram shows stellar V magnitude, or brightness, versus the strength of a planetary transit signal at 1 atmospheric scale height, a measure of how easy it is to characterize an exoplanet’s atmosphere. The easiest atmospheres are to the upper right; the hardest to the lower left. The curves show where Hubble (red), and JWST (blue) can obtain a spectrum with resolution R, and a signal-to-noise-ratio of S/N. We see that JWST will enable us to probe a lot more atmospheres! Figure 2 from the paper.

Planned Transit-Hunting Machines: Finders, and Characterizers

According to Heng & Winn, the path to finding habitable planets is clear. We need to detect Earth sized planets around the nearest, and brightest stars. These are the systems that maximize the atmospheric signal-to-noise ratio. We could then proceed to search for biosignatures with enough statistics to robustly probe for signs of life. This, however, requires new telescopes to first find them, and then characterize them. What is planned?

First are the transit-finders. Kepler changed the game, but left most of the sky relatively unexplored. Its success has inspired a fleet of transit-hunting successors, space missions along with complementary efforts on the ground. Like Heng & Winn, we will focus here on the space missions, some of which are contrasted in the figure below. NASA's TESS mission is scheduled to launch in 2017, and will scan the entire sky in a systematic manner, specializing in finding nearby short-period planets. In contrast, the European CHEOPS mission, also scheduled to fly in 2017, focuses on studying transits one star at a time. Later, in 2024, comes the most ambitious of them all: the PLATO mission. It will borrow hunting strategies from Kepler, TESS and CHEOPS, and aims to build a catalog of true Earth analogs: Earth-like exoplanets orbiting within the habitable zones of Sun-like stars.


Figure 3: The sensitivity of transit-hunters. Space-based missions, like Kepler, TESS, and PLATO, are sensitive to a wide range of exoplanet sizes, while current ground-based surveys can only find the larger planets. TESS and PLATO specialize in finding planets all over the sky, while the original Kepler mission stared deeper and fainter at a single field of view. Orbital periods are not shown in this diagram. LY means one light year. Figure 3 from the paper.

Then there are the atmospheric-characterizers. These are the big telescopes that will focus on recording transmission spectra during planetary transits. This will largely fall into the hands of the much-anticipated JWST space telescope and the upcoming extremely large ground-based telescopes. Ideally, before these expensive shared-time observatories come online, we would want to have compiled a list of our best candidates: our top 10 sexiest transiting planets. Let's get cracking!


LightSail Launch Countdown: Ready for Rollout by The Planetary Society

It’s launch week in Cape Canaveral, Florida, where The Planetary Society’s LightSail spacecraft is buttoned up for flight aboard an Atlas V rocket. Liftoff is scheduled Wednesday sometime between 10:45 a.m. and 2:45 p.m. EDT.


May 18, 2015

Thoughtcrime by Charlie Stross

Last week, our newly re-elected Prime Minister, David Cameron, said something quite remarkable in a speech outlining his new government's legislative plans for the next five years. Remarkable not because it's unexpected that a newly formed Conservative government with a working majority would bang the law and order drum, but because of what it implies:

"For too long, we have been a passively tolerant society, saying to our citizens 'as long as you obey the law, we will leave you alone'."

Think about it for a moment. This is the leader of a nominally democratic country saying that merely obeying the law is not sufficient: and simultaneously moving to scrap the Human Rights Act (a legislative train-wreck if ever I saw one) and to bring in laws imposing prior restraint on freedom of political speech (yes, requiring islamists to show the Police everything they say on Facebook before they say it is censorship of political speech, even if you don't like what they're saying).

We've been here before, of course.

Back in 2005, during one of the regular law'n'order circlejerks to which we have grown inured—this one triggered by the terrorist suicide bombings of 7/7 in London—the Labour Party brought in a spectacularly ill-conceived over-reaction in the shape of the Terrorism Act 2006. Among other things, they attempted to give the police the power to detain and question suspects without charge for up to 90 days (in the House of Commons this caused a rebellion, and it was eventually cut to 28 days—still far too long for arrest and interrogation without criminal charges), but moreover, created (Tony Blair's words): "an offence of condoning or glorifying terrorism. The sort of remarks made in recent days should be covered by such laws."

Get that: glorifying terrorism was to become an offense.

We all know of those vile Da'esh beheading videos, which is probably the sort of thing the Home Office had in mind. But the law was drafted so vaguely and broadly that a bunch of unintended consequences emerged. For example, what is "glorification" and what is "terrorism"? Lest we forget, Nelson Mandela was identified as a terrorist. So was that other Nobel Peace Prize winner, Menachem Begin. The current Deputy First Minister of Northern Ireland, with whom Tony Blair was doubtless on a first name basis, spent many years in British prisons for murders he allegedly committed while leading a terrorist organization. Is it "glorifying terrorism" to express happiness at the success of the ANC in forcing the overtly racist system of Apartheid South Africa to the negotiating table?

The law was drafted in such a way that works of fiction fell within its scope. So a group of bolshy, lefty, civil-rights-focussed literary academics with an interest in the SF field got together and published a slim anthology, the title of which was intended to provoke the Director of Public Prosecutions into either shitting or getting off the pot.

I'm afraid you can't buy a copy of the Glorifying Terrorism SF anthology (it's out of print, and not going to be reprinted or published as an ebook any time soon, because of the ongoing VATMESS headache). But ... the majestic organs of the state took one look at it and said "na na I can't hear you, not going there, you can't make me, I'd look like a tool". A few years later the "Glorifying Terrorism" charge was quietly written out of the statute books. And I'd like to think we had something to do with it.

Which brings me to the topic of the very short short story below, which now exists in a kind of counterfactual limbo, an alternate history where the financial crash of 2007/08 never happened, Tony Blair kept on getting worse, the "Glorifying Terrorism" offense stayed on the books, and UKIP never happened. Instead, the BNP—the knuckle-dragging neo-fascists whom UKIP have largely supplanted—somehow parlayed an unspecified terrorism-related crisis into a rise to government, and then the inevitable reductio ad absurdum ensued:

(See if you can figure out who I cribbed the declaration from?)




MINUTES OF THE LABOUR PARTY CONFERENCE, 2016

PREAMBLE TO THE MINUTES OF THE LABOUR PARTY CONFERENCE, 2016

Greetings from the National Executive.

Before reading any further, please refer to the Security Note and ensure that your receipt and use of this document is in compliance with Party security policies. If you have any doubts at all, burn this document immediately.




SECURITY NOTE




This is an official Labour Party Document. Possession of all such documents is a specific offense under (2)(2)(f) of the Terrorism Act (2006). Amendments passed by the current government using the powers granted in the Legislative and Regulatory Reform Act (2006) have raised the minimum penalty for possession to 10 years imprisonment. In addition, persons suspected of membership of or sympathy for the Labour Party are liable for arrest and sentencing as subversives under the Defence of the Realm Act (2014).

You must destroy this document immediately, for your own safety, if:

You have any cause to suspect that a neighbour or member of your household may be an informer,

You have come into possession of this document via a suspect source, or if your copy of this document exhibits signs of having been printed on any type of computer printer or photocopier, or if you received this document in a public place that might be overseen by cameras, or if it may have been transmitted via electronic means.

The Party would be grateful if you can reproduce and distribute this document to sympathizers and members. Use only a typewriter, embossing print set, mimeograph, or photographic film to distribute this document. Paper should be purchased anonymously and microwaved for at least 30 seconds prior to use to destroy RFID tags. Do not, under any circumstances, enter or copy the text in a computer, word processor, photocopier, scanner, mobile phone, or digital camera. This is for your personal safety.




MINUTES OF THE LABOUR PARTY CONFERENCE, 2016

1. Apologies for absence were made on behalf of the following:

Deputy Leader, Hillary Benn (executed by junta)

Government, Douglas Alexander (executed by junta)

Government, Kate Hoey (detained, Dartmoor concentration camp)

EPLP Leader, Mohammed Sarwar (executed by junta)

Young Labour, Judy Mallaber (detained, Dartmoor concentration camp: show trial announced by junta)

...

2. Motions from the national executive:

1) In the light of the government's use of its powers of extradition under the US/UK Extradition Treaty (2005), and their demonstrated willingness to lie to the rest of the world about their treatment of extradited dissidents, it is no longer safe to maintain a public list of shadow ministers and party officers. With the exception of the offices of Party Spokesperson and designated Party Security Spokesperson, it is moved that:

Open election of members of the National Executive shall be suspended,

Publication of the names and identities of members of the National Executive shall be suspended,

The National Executive will continue to function on a provisional basis making ad-hoc appointments by internal majority vote to replace members as they retire, are forced into exile, or are murdered by the junta;

From now until the end of the State of Emergency and the removal of the current government, at which time an extraordinary Party Conference shall be held to publicly elect a peacetime National Executive.

(Carried unanimously.)

2) In view of the current government's:

  • suspension of the Human Rights Act (1998), Race Relations Act (2000), and other Acts,

  • abrogation of the Treaty of Europe and secession from the European Union,

  • amendment via administrative order of other Acts of Parliament (including the reintroduction of capital punishment),

  • effective criminalization of political opposition by proscribing opposition parties as "organisations that promote terrorism" under the terms of the Terrorism Act (2000),

  • establishment of concentration camps and deportation facilities for ethnic minorities, political dissidents, lesbian, gay, bisexual and transgendered citizens, and others,

  • deployment of riot police and informal militias against peaceful demonstrations and sit-ins, with concomitant loss of life,

  • and their effective termination of the democratic processes by which the United Kingdom has historically been governed,

We find, with reluctance, that no avenue of peaceful dissent remains open to us. We are therefore faced with a choice between accepting defeat, and continuing the struggle for freedom and democracy by other means.

We shall not submit to the dictatorship of the current government, and we have no choice but to hit back by all means within our power in defence of our people, our future and our freedom. The government has interpreted the peacefulness of the movement as weakness; our non-violent policies have been taken as a green light for government violence. Refusal to resort to force has been interpreted by the government as an invitation to use armed force against the people without any fear of reprisals. It is therefore moved that:

A National Resistance Movement is created. The Movement will seek to achieve liberation without bloodshed or violence if possible. We hope—even at this late moment—that the government will come to its senses and permit a free and fair general election to be held in which parties representing all ideologies will be permitted to stand for election. But we will defend our supporters and the oppressed against military rule, racist tyranny, and totalitarianism, and we will not flinch from using any tool in pursuit of this goal.

The Movement will work to achieve the political goals of the Labour Party during the state of emergency, and will cooperate willingly with other organizations upon the basis of shared goals.

The Movement will actively attack the instruments of state terror and coercion, including functionaries of the government who enforce unjust and oppressive laws against the people.

At the cessation of the struggle, a National Peace and Reconciliation Commission shall be established and an amnesty granted to members of the Movement for actions taken in the pursuit of legitimate orders.

In these actions, we are working in the best interests of all the people of this country - of every ethnicity, gender, and class - whose future happiness and well-being cannot be attained without the overthrow of the Fascist government, the abolition of white supremacy and the winning of liberty, democracy and full national rights and equality for all the people of this country.

(Carried 25/0, 3 abstentions)

3) All Party members who are physically and mentally fit to withstand the rigours of the struggle are encouraged to organize themselves in cells of 3-6 individuals, to establish lines of communication (subject to the Party security policies), and to place themselves at the disposal of the National Resistance Movement. Party members who are unable to serve may still provide aid, shelter, and funds for those who fight in our defence.

(Carried unanimously)

3. Motions from the floor

The party recognizes that our own legislative program of the late 1990s and early 2000s established the framework for repression which is now being used to ruthlessly suppress dissent. We recognize that our neglect of the machinery of public choice in favour of the pursuit of corporatist collaborations permitted the decay of local and parliamentary democracy that allowed the British National Party to seize power with the support of no more than 22% of the electorate. We are therefore compelled to admit our responsibility. We created this situation; we must therefore repair it.

Never again shall the Labour Party place national security ahead of individual freedoms and human rights in its legislative program. It is therefore moved that the following quotation from Benjamin Franklin be inserted between Clause Three and the current Clause Four of the Party Constitution:

"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."

(Carried 16/12)





May 16, 2015

Electrical noise on the temperature sensors by Goatchurch

Last year I discovered interference issues with the barometric sensor, which I solved by stashing the sensor on a separate board with its own microcontroller and power supply and communicating back via an optocoupler in a way no electrical engineer would find acceptable.

Now I have finally got an understanding of the noise visible on my accurate temperature sensing system based on an analog TMP36 wired to a 16 bit analog-to-digital converter.

Those fridge temperature samples were smoothed by filtering for the maximum value across a sample window, owing to the observation that all the errors occurred downwards.
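
For the curious, here is a minimal sketch (mine, not the code used for the fridge data) of that kind of filter: because the interference only ever drags readings downwards, taking the maximum over a short rolling window recovers the underlying temperature. The window length and sample values are illustrative.

// A minimal sketch (not the author's code) of a max-over-window filter:
// downward-only noise spikes are rejected by reporting the maximum of the
// last N samples. Window size and readings are illustrative.
#include <cstdio>
#include <cstddef>
#include <deque>
#include <algorithm>

class MaxWindowFilter {
public:
  explicit MaxWindowFilter(std::size_t window) : window_(window) {}

  // Push a raw reading and return the maximum over the current window.
  float Update(float reading) {
    samples_.push_back(reading);
    if (samples_.size() > window_)
      samples_.pop_front();
    return *std::max_element(samples_.begin(), samples_.end());
  }

private:
  std::size_t window_;
  std::deque<float> samples_;
};

int main() {
  MaxWindowFilter filter(8);           // 8-sample window, illustrative
  const float raw[] = {21.50f, 21.49f, 21.02f, 21.51f, 20.98f, 21.52f};
  for (float r : raw)                  // the downward spikes get filtered out
    printf("raw %.2f -> filtered %.2f\n", r, filter.Update(r));
  return 0;
}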

I had thought that this was an artifact of the ADC device, which would tend to undervalue the final bits of the conversion.

But it’s clearly due to various other sensor devices switching on briefly many times a second, causing a voltage drop as they draw their power, and my ADC detecting these fluctuations rather than any changes in temperature.

The experiment which isolated this effect was a hack to the main loop to make it switch off or on sets of devices every 30 seconds, like so:

void WholeSdOledBle::LoopWhole()
{
  int mstamp = millis(); 
  int icyc = (mstamp/1000/30+5) % 8;    // cycles 0..7, advancing every 30 seconds
  if (adctmp36)       FetchADCtmp36(100, 80);   // the TMP36 under test is always read
  if (icyc>=2)  {                       // most other sensors only when icyc reaches 2
    if (jeenodeserial3) FetchJeenodeserial3(); 
    if (consoleserial)  FetchConsoleSerial(); 
    if (BTLEserial)     FetchBLE(); 
    if (gpsdata)        FetchGPS(); 
    if (wr)             FetchWind(); 
    if (lightsensor)    FetchLight(); 
    if (baroreceiver)   FetchBaro(); 
    if (dst)            FetchDallas(); 
  }
  if (icyc>=3) {                        // further groups join at higher icyc values
    if (irthermometer)  FetchIRtemp(200, 30);
    if (compassdata)    FetchCompass(); 
  }
  if (icyc>=4) 
    if (gyrodata)       FetchGyro(); 
  if (icyc>=5) 
    if (humiditydata)   FetchHumidity(); 
  if (icyc>=6) 
    if (tbarometer)     FetchTBarometer(); 
}

Here is the graph of my temperature readings over part of this window. See how my yellow line of accurate readings (at a precision of 0.003degC) gains 0.5degC of noise when I switch all the sensors on.

adcnoise

This is going to be of no use if I’m trying to detect the subtle temperature change due to flying through a thermal.

It’s important to note that I have been ignoring this obvious effect until now, and that it would be almost invisible had I been sampling at a more “normal” rate of once every 15 seconds, as you would if you intended your device to go somewhere boring, like on a lamp-post or in the home. Although 0.5degC is a substantial fluctuation, it’s ostensibly within the tolerance of the device, and so would be likely to be ignored by the incurious who just want to get the data out there and onto the internet, where people would download it and potentially waste their time isolating these erroneous signals caused by the instrument rather than the environment. And, not having access to the devices, they would be unable to conduct the necessary experiments to reveal this effect, even if they became suspicious.

It’s exactly this kind of application of the sensors in more interesting places that’s going to feed back to the technology as it is used in more boring scenarios — an argument that was lost on the POC21 project team, to which I applied three weeks ago on a whim.

The content of my failed application is as follows:

Proposal: With the equipment on the market today it’s very easy to make the hardware for an amazing sensor data gathering project, and be convinced that once the streams of measurements have been stored into a massive database, then there will be something which we can do with the data to make a difference.

There never is. No such software or expertise exists. All you have is flat data, maybe some graphing capability, and a system to download the data so that someone else can do something interesting with it if they can think of an idea.

In the end, when you take a critical view of most of these projects, what has really happened is that the team discovered it was too hard and too boring to do something with the data, and they lost interest and pretended that it wasn’t a problem.

http://www.freesteel.co.uk/wpblog/2014/10/21/the-exponential-decay-curves-are-nice-shame-about-the-theory/

I believe I have a platform that can generate sensor data which is sufficiently interesting that I will work out the techniques for handling this type of data and turning it into actual actions, rather than simply some pretty graphs.

My platform is my hang-glider, and I would like to learn how to fly it better.

Premature optimization is the root of all evil.

Premature application is the root of all boredom and failure.

It’s got to be fun.

Background: In 2008 I co-founded a data company called ScraperWiki in Liverpool, which normally works with government administrative data.

More recently, my projects in dynamic environmental data have included Housahedron, based on installing 100 temperature sensors in a house, knowing its internal polyhedral volume, and being able to see in real time the cold air drafting through the front door and around the house, or the heat convecting off the radiators:
https://web.archive.org/web/20140720195335/http://www.housahedron.co.uk/

Housahedron was accepted into the Berlin Startup Bootcamp 2014:
http://www.freesteel.co.uk/wpblog/2014/10/08/washed-up-in-berlin/

Then I investigated the potential of a very cheap device you could put in your fridge (cost less than a pint of milk, and be smaller) that measured its duty cycle over time and gave it a quality rating out of 100 depending on its energy efficiency. With such ratings, it would be possible to target the replacement of the lowest rated fridges across the country and make the biggest difference.
http://www.freesteel.co.uk/wpblog/2014/11/21/fridge-temperature-filtering/

But this got too difficult when I discovered how much temperature variation there was in the interior shell, and that I’d have to spend years pushing this idea around till someone could sponsor it.

And anyway, there was only one action that would result from all the data gathering — a decision to purchase a new fridge and at what time.

So now I’m doing a really, really interesting cool sensor data project. And maybe this time I will push it through to some sort of conclusion where, yes, the data gets converted into action.

These developments, and any software and techniques, will apply to all other energy monitoring data projects — including any projects which are on the Program. I can supply teaching, skills and hard questions to any other teams who need them.

See links for some references to on-going work in data gathering and analysis. Sensors I have are:

Geometric: GPS, compass, accelerometer, gyro (to be replaced with a single Bosch BNO055 device), highly accurate barometer, windspeed meter, wind-flow stall front detector

Atmospheric: 4 different temperature devices, humidity sensor, barometer, wing temperature infra-red sensor

Experiments include:
(a) determining the vortex rotation of a thermal
(b) discovering the temperature change when crossing the threshold of a thermal
(c) calculating the polar curve of glide angle vs air speed
(d) calculating the bank angle vs turning radius
(e) relating the temperature lapse rate in the atmospheric column and comparing with thermal strength
(f) estimating cloud base height from humidity, temperature and estimated lapse rate (see the sketch after this list)
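
For experiment (f), one plausible first cut (my sketch, not the author's planned method) is the textbook lifting-condensation-level estimate: derive the dewpoint from temperature and relative humidity with the Magnus approximation, then place cloud base roughly 125 m above the sensor for every degree Celsius of dewpoint spread.

// A minimal sketch (not the author's code) for estimating cloud base from
// temperature and relative humidity. Dewpoint uses the standard Magnus
// approximation; cloud base uses the usual rule of thumb of ~125 m of
// lifting condensation level per degC of dewpoint spread.
#include <cstdio>
#include <cmath>

// Dewpoint (degC) from air temperature (degC) and relative humidity (%).
double dewpoint_c(double temp_c, double rh_percent) {
  const double a = 17.27, b = 237.7;   // Magnus constants
  double gamma = log(rh_percent / 100.0) + a * temp_c / (b + temp_c);
  return b * gamma / (a - gamma);
}

// Approximate height of cloud base above the sensor, in metres.
double cloud_base_m(double temp_c, double rh_percent) {
  return 125.0 * (temp_c - dewpoint_c(temp_c, rh_percent));
}

int main() {
  // Illustrative soaring day: 18 degC and 55% relative humidity at launch height.
  double t = 18.0, rh = 55.0;
  printf("dewpoint %.1f degC, cloud base ~%.0f m above sensor\n",
         dewpoint_c(t, rh), cloud_base_m(t, rh));
  return 0;
}

This is only accurate to a hundred metres or so, but it gives a live number to compare against the actual cloud base reached while circling.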

If you are worried that you have too many teams who have great ideas to deploy these amazing data sensors in houses, in heat storage systems, and so on, but that they might get lost when it comes time to actually do something with the data, then you need me on the Program.

Meanwhile, I spent last Wednesday nailed to the hill of Penmaenbach above Conwy in a light sea breeze dodging one other glider. Unfortunately the SD card was disconnected, so I didn’t get any data.

penmaen

At least I got some nice pictures, though.


How do M-dwarf Stars Measure Up? by Astrobites

  • Title: A 0.24 + 0.18 M⊙ double-lined eclipsing binary from the HATSouth survey
  • Authors: G. Zhou, D. Bayliss, J. D. Hartman, M. Rabus, G. Á. Bakos, A. Jordán, R. Brahm, K. Penev, Z. Csubry, L. Mancini, N. Espinoza, M. de Val-Borro, W. Bhatti, S. Ciceri, T. Henning, B. Schmidt, S. J. Murphy, R. P. Butler, P. Arriagada, S. Shectman, J. Crane, I. Thompson, V. Suc, R. W. Noyes
  • First Author’s Institution: Research School of Astronomy and Astrophysics, Australian National University, Canberra
  • Paper Status: Accepted for publication in MNRAS

What’s faint, red, and all over? M-dwarfs, of course! These cool, low-mass stars litter our galaxy, and an M-dwarf star’s lifetime is longer than the Universe is old. But despite their strength in numbers and their longevity, M-dwarfs are difficult to track down and we know relatively little about them. They are just too dim.

Today’s paper is helping to change that for two M-dwarfs at a time. The figure below shows the discovery light curve of the eclipsing binary HATS551-027: two very low-mass stars orbiting each other every 4.1 days. This is only the third system like this we have found that lets us study both stars in great detail.


Discovery light curve of a new pair of very low-mass M-dwarfs in an eclipsing binary. These observations of HATS551-027 are from the HATSouth survey. The authors of today’s paper use this light curve together with lots of follow-up observations to characterize the pair of stars as robustly as possible.

Very low-mass M-dwarfs (less than a third as massive as our Sun) are a missing link in our theory of stellar interiors. Current theories suggest that stars this small have fully convective interiors. In other words, the only way energy is transported inside the star is through convection, not radiation. In contrast, radiation takes over in the cores of more massive stars, including the Sun. Modeling a fully convective star is tricky! We don’t have a complete understanding of how it affects global properties like radius or temperature. It’s important to get right, if for no other reason than that lots of exoplanets orbit M-dwarfs, and you can never hope to learn the secrets of a distant planet if you don’t understand its host star.

Shortly after HATS551-027 was discovered, the authors of today’s paper began an ambitious suite of follow-up observations. They obtained images with two additional telescopes and spectra from three. As any observer can tell you, combining data from five different configurations of telescope + instrument into a coherent picture takes a lot of work and double-checking for consistency. But the great thing about eclipsing binaries is you can use observations like these to model star brightness and motion and measure important properties like mass and radius.
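
As a rough illustration of that last point, here is a minimal sketch (not the paper's analysis) of how a double-lined eclipsing binary gives you masses almost directly: the eclipses pin down the inclination, and the two radial-velocity semi-amplitudes plus Kepler's third law do the rest. The period and semi-amplitudes below are round numbers chosen to land near the 0.24 + 0.18 solar-mass result in the title, not the measured values.

// A minimal sketch, not the paper's pipeline: masses of a double-lined
// eclipsing binary from the orbital period, the two radial-velocity
// semi-amplitudes K1 and K2, and the inclination i from the eclipse fit,
// assuming (near-)circular orbits. Input values are illustrative round
// numbers, not the measured ones from Zhou et al.
#include <cstdio>
#include <cmath>

int main()
{
  const double PI   = 3.141592653589793;
  const double G    = 6.674e-11;   // m^3 kg^-1 s^-2
  const double Msun = 1.989e30;    // kg
  const double day  = 86400.0;     // s

  double P   = 4.1 * day;          // orbital period
  double K1  = 43.0e3;             // primary RV semi-amplitude, m/s
  double K2  = 57.0e3;             // secondary RV semi-amplitude, m/s
  double inc = 90.0 * PI / 180.0;  // inclination from the eclipse model

  double Ksum  = K1 + K2;
  double sin3i = pow(sin(inc), 3);
  double M1 = P * Ksum * Ksum * K2 / (2.0 * PI * G * sin3i);  // kg
  double M2 = P * Ksum * Ksum * K1 / (2.0 * PI * G * sin3i);  // kg

  printf("M1 = %.2f Msun, M2 = %.2f Msun, mass ratio q = %.2f\n",
         M1 / Msun, M2 / Msun, M2 / M1);
  return 0;
}

With those inputs the masses come out close to the values in the paper's title; the radii then follow separately from the depths and durations of the eclipses.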

In the end, the authors find that both stars are cooler and somewhat larger than theoretical models predict. They report that the cool temperatures are consistent with other very low-mass M-dwarfs measured previously. A comparison of HATS551-027’s masses, radii, and temperatures with those of the only similar eclipsing binaries known and four theoretical models is shown below.


How do M-dwarfs measure up in radius (top) and temperature (bottom)? The stars of today’s paper, HATS551-027A&B, are plotted in red, and four other very low-mass stars in eclipsing binaries are plotted in gray. Theoretical models are plotted as different color lines for comparison.

One other interesting feature of this binary is emission in the Hydrogen-alpha line in both stars’ spectra. Most stellar spectra show purely absorption lines, but many M-dwarfs do have Hydrogen emission. Previous studies have shown this to be an indicator of stellar activity. This makes sense here because the out-of-eclipse brightness varies significantly, which points to star spots. Some have even suggested that Hydrogen emission may correlate with M-dwarfs being larger and/or cooler than models predict. However, it is hard to say for sure with just a handful of well-studied M-dwarfs. The hunt for more eclipsing binaries with very low-mass stars continues.


Unseen latitudes of comet Churyumov-Gerasimenko -- revealed! by The Planetary Society

A recent Rosetta image has revealed a good part of the comet's previously hidden southern terrain to the public for the first time.


May 15, 2015

On 61 different forms of gameplay by Simon Wardley

Over the next year, through a series of 61 posts, I'm going to go through in some detail the various forms of gameplay that can be used when you have a map of a business environment.

I need to emphasise Sun Tzu's five factors in competition - purpose, climate, landscape, leadership and doctrine - though unfortunately most people ignore climate & landscape (a bit like trying to use Boyd's OODA loop but ignoring the observe & orientate bit). The problem with this is that whilst the game plays can help you manipulate the landscape, if you can't see the environment then they can be downright dangerous. It's always a good idea to look where you're shooting before you fire the rifle.

In normal circumstances, you will use your map with several of these game plays in concert. Of course, you'll need to have used your map to determine your direction of travel and where you wish to attack first. This is often an iterative process itself, known as scenario planning.

The complete set of plays that I'll be covering is provided in figure 1. NB, I've shaded the plays according to how 'evil' or 'good' they are using AD&D terminology. I've provided some basic summary descriptions below, and as I post the details I'll add links to make it easy to navigate.

Figure 1 - The plays.



Basic Operations
These forms are about improving the organisation itself.
  • Focus on user needs
    A key aspect of mapping is to focus the organisation on user needs rather than internal needs. Often this process enables unmet needs (opportunities) to be discovered and friction to be removed from the process of dealing with customers.
  • Situational Awareness
    The act of mapping tends to remove alignment issues between business, IT and other groups by providing a common language. It enables the development of a common purpose and empowers groups to take advantage of their part of the map whilst understanding the whole.
  • Effective & Efficient
    Removal of bias and duplication within an organisation along with the use of appropriate methods for management and purchasing. Do not underestimate the potential savings possible, cost reductions of 90-95% are not uncommon.
  • Structure & Culture
    Implementation of cell based & PST structures along with multiple cultures to deal with aptitude and attitude. Both autonomy and mastery can be enabled by these forms of structure and they avoid the silos and inertia created by traditional structures.
  • Optimising Flow
    Risk, performance, information and financial flow can be analysed and improved through mapping. This is necessary for increasing margin, removing friction and increasing speed.
  • Channel Conflict
    Exploiting new channels and conflict within existing channels to create favourable terms.

User Perception
These forms are about influencing the end user view of the world.
  • Education
    Overcoming user inertia to a change through education. There are 16 different forms of inertia and many can be overcome directly with education. Don't underestimate this.
  • Bundling
    Hiding a disadvantageous change by bundling the change with other needs.
  • Creating artificial needs
    Creating and elevating an artificial need through marketing and behavioural influence. Take a rock and make it a pet etc.
  • Confusion of Choice
    Preventing users from making rational decisions by overwhelming them with choice.
  • FUD
    Creating fear, uncertainty and doubt over a change in order to slow it down.
  • Artificial competition
    Creating two competing bodies to become the focus of competition and in effect driving oxygen out of a market.
  • Lobbying
    Persuading Government of a favourable position.

Accelerators
These enable you to accelerate the process of evolution.
  • Market Enablement
    Encouraging the development of competition in a market.
  • Open Approaches
    Encouraging competition through open source, open data, open APIs, open processes by removing barriers to adoption and encouraging a focus for competition.
  • Exploiting Network Effects
    Techniques which increase the marginal value of something as the number of users increases.
  • Co-operation
    Working with others. Sounds easy, actually it's not.
  • Industrial Policy
    Government investment in a field.

Deaccelerators
These enable you to slow down the process of evolution.
  • Exploiting existing constraints
    Finding a constraint and reinforcing it through supply or demand manipulation.
  • Patents & IPR
    Preventing competitors from developing a space including ring fencing a competitor.
  • Creating constraints
    Supply chain manipulation with a view to creating a new constraint where none existed.
  • Limitation of competition
    Through regulatory or other means including erecting barriers to prevent or limit competitors.

Dealing with toxicity
Elements of your value chain will become irrelevant through evolution over time; there are numerous ways of dealing with this, especially as the inertia created can become toxic.
  • Disposal of liability
    Overcoming the internal inertia to disposal. Your own organisation is likely to fight you even when you're trying to get rid of the toxic.
  • Sweat & Dump
    Exploiting a 3rd party to take over operating the toxic asset whilst you prepare to remove yourself.
  • Pig in a poke
    Creating a situation where others believe the toxic asset has long term value and disposing of it through sale before the toxicity reveals itself.

Market
Standard ways of playing in the market.
  • Differentiation
    Creating a visible difference through user needs.
  • Pricing policy
    Exploiting supply and demand effects including price elasticity, Jevons paradox and constraints including fragmentation plays.
  • Exploiting buyer / supplier power
    Creating a position of strength for yourself.
  • Harvesting
    Allowing others to develop upon your offerings and harvesting those that are successful. Techniques for ensuring harvesting creates positive signals rather than creating an environment others avoid.
  • Standards game
    Driving a market to a standard to create a cost of transition for others or remove the ability of others to differentiate.
  • Signal distortion
    Exploiting commonly used signals in the market by manipulation of analysts to create a perception of change.

Defensive
Standard ways of protecting your market position.
  • Threat acquisition
    Buying up those companies that may threaten your market.
  • Raising barriers to entry
    Increasing expectations within a market for a range of user needs to be met in order to prevent others entering the market.
  • Procrastination
    Doing nothing and allowing competition to drive a system to a more evolved form.
  • Defensive regulation
    Using Governments to create protection for your market and slow down competitors.

Attacking
Standard ways of attacking a market change.
  • Directed investment
    VC approach to a specific or identified future change.
  • Experimentation
    Use of specialist groups, hackdays and other mechanisms of experimentation.
  • Creating centres of gravity
    Creating a focus of talent to encourage a market focus on your organisation.
  • Undermining barriers to entry
    Identifying a barrier to entry into a market and reducing it to encourage competition.
  • Fool's mate
    Using a constraint to force industrialisation of a higher order system.

Ecosystem
Using others to help achieve your goals.
  • Alliances
    Working with other companies to drive evolution of a specific activity, practice or data set.
  • Co-creation
    Working with end users to drive evolution of a specific activity, practice or data set.
  • Sensing Engines (ILC)
    Using consumption data to detect future success.
  • Tower and Moat
    Dominating a future position and preventing future competitors from creating any differential.
  • Two factor
    Bringing together consumers and producers and exploiting the relationship between them.
  • Co-opting
    Copying a competitor's move and undermining any ecosystem advantage by interrupting data flows.
  • Embrace & Extend
    Capturing an existing ecosystem.

Competitor
Dealing with the opposition if you can't work with them.
  • Tech Drops
    Creating a 'follow me' situation and dropping large technology changes onto the market.
  • Fragmentation
    Exploiting pricing effects, constraints and co-opting to fragment a competitor's market.
  • Reinforcing inertia
    Identifying inertia within a competitor and forcing market changes that reinforce this.
  • Sapping
    Opening up multiple fronts on a competitor to weaken their ability to react.
  • Misdirection
    Sending false signals to competitors or future competitors, including investment focused in the wrong direction.
  • Restriction
    Limiting a competitor's ability to adapt.
  • Talent Raid
    Removing core talent from a competitor either directly or indirectly.

Positional
General forms of playing with the future market.
  • Land grab
    Identifying and positioning a company to capture a future market space.
  • First mover
    Exploiting first mover advantage especially with industrialisation to component services.
  • Fast follower
    Exploiting fast follower advantage into uncharted spaces.
  • Weak Signal
    Use of common economic patterns to identify where and when to attack.

Poison
General forms of preventing others playing with the future market. If you can't capture it then poison it.
  • Licensing
    Use of licensing to prevent future competitor moves.
  • Insertion
    Either through talent or misdirection, encouraging false moves in a competitor.
  • Design to fail
    Removing potential future threats by poisoning a market space before anyone attempts to establish it.

As mentioned above, the techniques are normally used in combination plays e.g. you might be a first mover to industrialise a specific component and use this to establish an ILC type ecosystem whilst exploiting competitors' inertia and misdirecting any would-be threats.

This is not the full list of plays, but it covers the ones that I think it's reasonable I can cover over a period of one year. There are some other plays but I'll leave those to a future date.

Again, I cannot emphasise enough the importance of situational awareness before using some of these plays. It is trivially easy to create a disaster, e.g. being a first mover to try and build an ILC ecosystem around a relatively novel activity and harvesting aggressively. This will just end up destroying your ability to create an ecosystem, leaving you with a bad reputation in a field which might at a later date become suitable for the same sort of play. Some of the plays are also downright evil, so be warned. Understand the environment, learn to play the game and build with experience.


Blog all the things... by Dan Catt

...well that's the theory anyway.

I've gotten a whole long list of posts I've been meaning to write forever. I end up thinking that I just need to find the time to write one. But writing takes me forever and so that time never comes.

Then, one by one they slowly go out of date. Or at least to me they seem out of date. When I've been thinking about writing a blogpost about 3D printing for about 3 years now, I'm not sure if what I was going to write then even makes sense now.

But!

But, I've decided just to write the damn things anyway, no matter how stupidly irrelevant they are now. I can only apologise in advance... of course they're still going to take me forever to write.

What follows is an incomplete list of stuff I've so far failed to write about.

  • Code writing styles when publishing on GitHub
  • Horror films for kids
  • The Politics of code & the Guardian API
  • Modern Trepanning: Things that can make you feel old
  • Something about Sigue Sigue Sputnik
  • The work I'm doing on Contributoria and why it's awesome
  • Books the kids like
  • Discovering I have Asperger's and what it does and doesn't mean
  • Sentiment analysis of Guardian comments over time by section
  • How I discovered Discordia and what I did when I found her
  • Not speaking at conferences
  • How I ended up speaking at that one conference
  • Data visualisation for newspaper covers
  • 1 Year of Freelancing (it's now 3 years)
  • Why I number my projects
  • On learning Node.js and other past failed languages
  • BBC Micro graphics, and coding in tight spaces
  • 3D printing Artisanal Integers
  • How I make the podcasts I make
  • Fooling smart advertising boards in Manchester with balloons
  • 3D filament maker, and slow composting
  • Why we have solar panels
  • The Zombie game I still haven't finished
  • The importance of innovation in work
  • Falling out of love with TV, The Adventure Game & Noel Edmonds
  • Passive games
  • Some thoughts about the text adventure game Castle of Riddles
  • Moving away from gmail
  • On not being influenced by what other people are doing

We Love A Crowd! by The Planetary Society

This month, at the same time that The Planetary Society is launching the long-anticipated LightSail prototype for a shakedown cruise, we are excited to launch another “first”—our first-ever Kickstarter campaign.


May 14, 2015

Forming Rings in a Protoplanetary Disk by Astrobites

Detecting planet-forming gaps in a disk

141105_ALMA_HL_01

Figure 1: An ALMA radio image of the protoplanetary disk around the young star HL Tau. The circular disk appears elliptical because it is inclined with respect to our line of sight. With ALMA’s high resolution capabilities, the image shows dark gaps in the disk, which are formed by protoplanets sweeping out the dust around their orbits. Today’s paper attempts to explain why the planets are forming at those particular positions in the disk. Figure from ALMA and NRAO.

One of the primary goals in the creation of the ALMA radio array was to find evidence of actively-forming planets in protoplanetary disks. Last Fall, as part of ALMA’s testing and verification procedures, the telescope observed the protoplanetary disk around HL Tau, a young Sun-like star about 140 parsecs (450 light-years) away. The exquisite resolution allows us to see bright rings in the disk, separated by dark gaps (see Fig. 1). These gaps form when small protoplanets become large enough to start sweeping up the dust and gas around them, leaving dark tracks along their orbits where the dust levels are low.

Forming protoplanets: why ice is nice

In order to form a protoplanet, tiny dust grains (carbon or silicate-based molecules a few microns across) in the disk must first collide and stick. They form larger and larger bodies as they collide through random interactions. Finally, one is large and massive enough to start sweeping up material around it through gravitational attraction. How fast this process occurs depends on how easily the particles stick together, which in turn depends on what they’re made of. Bare “rocky” grains have a hard time sticking together after a collision. But lab experiments show that “icy” grains (those on which water or other ice has condensed) are much stickier and less likely to fragment later. Therefore, protoplanet formation is thought to be more efficient in regions where ices are able to condense onto grains.

Now that astronomers have evidence for forming protoplanets in HL Tau, this model can be put to a direct test. The authors of today’s paper show that the locations of the protoplanetary gaps in HL Tau are to be expected from the condensation points of common ices in the disk.

Comparing observations to the model

The authors consider a long list of possible ice species, including water, carbon-monoxide and -dioxide, methane, and ammonia. They assume abundances of each ice based on Solar System comets, which are thought to reflect the original composition of the Sun’s protoplanetary disk. Each type of ice has a condensation temperature at which it should freeze out from the gas phase onto grain surfaces, given a reasonable estimate of the pressure in the disk. The temperature in the disk decreases with radius from the star, so for each ice there is a radius where the disk becomes cold enough to condense the ice. This radius, for that species of ice, is known as the species’ snow line. In addition to making grain-sticking easier, the snow line should create a pressure bump, helping to trap materials at that position and make it even easier to form planets.
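
As a toy version of that argument (my sketch, not the Zhang et al. disk model), assume the midplane temperature falls off as a power law T(r) = T0 (r/1 AU)^(-q); each ice then condenses outside the radius where T drops below its condensation temperature. The temperature normalisation, slope, and condensation temperatures below are round illustrative values picked to land near the observed gaps.

// A minimal sketch, not the paper's fitted disk model: snow-line radii from
// an assumed power-law temperature profile T(r) = T0 * (r / 1 AU)^(-q).
// The snow line for an ice with condensation temperature Tc sits where
// T(r) = Tc, i.e. r = (T0 / Tc)^(1/q) AU. All numbers are illustrative.
#include <cstdio>
#include <cmath>

int main() {
  const double T0 = 540.0;   // K at 1 AU; HL Tau is young and luminous
  const double q  = 0.5;     // typical slope for an irradiated disk

  struct Ice { const char* name; double Tcond; };
  const Ice ices[] = {
    {"water (H2O)",          150.0},
    {"ammonia (NH3)",         95.0},
    {"carbon dioxide (CO2)",  68.0},
  };

  for (const Ice& ice : ices) {
    double r_snow = pow(T0 / ice.Tcond, 1.0 / q);   // in AU
    printf("%-22s snow line near %5.1f AU\n", ice.name, r_snow);
  }
  return 0;
}

With these particular round numbers the three snow lines come out near 13, 32 and 63 AU; the paper does the same inversion with a properly modelled temperature structure rather than a one-line power law.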

Figure 2: The black curve shows the intensity (in “brightness temperature” units) of the disk, as a function of radius. Dips at 13, 32, and 63 AU show the location of the gaps. The red curve is a model of the temperature in the disk, and colored bands show where the listed ices should form snow lines based on that model. The predicted snow lines agree well with the positions of the gaps, suggesting that ice condensation has been responsible for accelerating the planet-formation process at those locations. Figure 2 from Zhang et al. 2015.

After taking into account a model for the temperature in the HL Tau disk, the authors can derive the expected location of the snow line for each ice they consider. The authors then analyze the ALMA images and derive distances from the center of the disk to each of the three major gaps. They find radii of 13, 32, and 63 AU, using the established distance to HL Tau of 140 parsecs. These locations agree very well (see Fig. 2) with the expected snow lines for water, ammonia, and carbon-dioxide, respectively, which are thought to be the three most abundant ices in the disk. The HL Tau observations, therefore, are a strong confirmation of this model of ice-aided planet formation.

Studying the dust in the gaps

Using images of the disk in several different frequency bands, the authors also hope to constrain information about the dust from which the protoplanets are forming. If, indeed, the snow lines have helped dust grains grow to larger sizes in the gaps, then the properties of the thermal emission (which comes from the warm dust) should differ between the gaps and the rest of the disk.

By taking the ratio of the flux in one band to another, the authors derive a spectral index (α) at each position in the disk. The spectral index measures how much the intensity increases at higher frequencies (smaller wavelengths). The index can be used to estimate the maximum size of dust grains in the emitting region.
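
Concretely, if the flux density scales as F_nu proportional to nu^alpha, then two bands are enough to estimate alpha, as in this minimal sketch (not the authors' pipeline); the frequencies and flux densities are made up for illustration.

// A minimal sketch, not the authors' analysis: the spectral index alpha
// from flux densities measured in two frequency bands, assuming
// F_nu ~ nu^alpha. Example numbers are invented for illustration only.
#include <cstdio>
#include <cmath>

// Spectral index from flux densities F1, F2 (any consistent unit)
// measured at frequencies nu1, nu2 (any consistent unit).
double spectral_index(double F1, double nu1, double F2, double nu2) {
  return log(F1 / F2) / log(nu1 / nu2);
}

int main() {
  // Two ALMA-like bands, roughly 230 GHz and 345 GHz (illustrative).
  double nu1 = 230e9, nu2 = 345e9;

  // Made-up flux densities: the second pair rises more steeply with
  // frequency, so it yields a larger alpha.
  printf("bright ring: alpha = %.2f\n",
         spectral_index(10.0e-3, nu1, 22.5e-3, nu2));
  printf("gap:         alpha = %.2f\n",
         spectral_index(2.0e-3, nu1, 5.5e-3, nu2));
  return 0;
}

With those invented fluxes the bright ring comes out at alpha = 2 and the gap at about 2.5, mirroring the numbers quoted below.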

In the main disk, α=2 throughout, while α increases to 2.5 in the gaps. A spectral index of α=2 should mean that the dust grains in the disk are very large, above centimeter sizes. However, finding α>2 in the gaps implies that the dust grains are smaller in the gaps than in the rest of the disk. This flies directly in the face of the idea that the gaps show the location of increased dust growth!

alpha

Figure 3: The spectral index as a function of radius in the disk. An index α=2 in between the gaps likely implies very large (~1 cm) dust grains. But since α>2 in the gaps, this seems to imply that dust grains are smaller in the planet-forming gaps. The authors concoct a two-component dust model to explain the apparent contradiction. Fig. 3f from Zhang et al. 2015

What’s wrong with the model?

In order to explain this conundrum, the authors come up with the idea of two populations of dust particles. One is made of small, rocky grains, which alone would result in a spectral index of α>2. The second population is made of much larger, icy grains, which emit at α=2. In the disk, second-population grains of centimeter sizes would dominate the emission, resulting in α=2. But in the gaps, the second-population grains could have grown even larger (to decimeter scales). They would be large, but few and far between, so most of the emission in the gaps would come again from the α>2 first population.

This extrapolated model will need much more observational confirmation in order to become accepted. The authors may suffer from over-eagerness to fit the observations, rather than admitting the (exciting) possibility of a conflict. Perhaps the dust particles in the gaps really are smaller, because the larger particles have already been swept up by the protoplanets? We could be learning new and unexpected things about planet formation from this possible conflict.

Regardless, the authors have clearly shown that the first observed planet-forming gaps in a disk match well the expected locations, assuming the onset of snow lines provokes increased planet formation. As ALMA continues to observe protoplanetary disks, we should soon find more examples of ongoing planet formation, which will help confirm or discard this new two-population model of dust grains. This excellent new facility, in full-science mode for only a few months now, has already begun to push the boundaries of our understanding of how planets form. Look forward to more exciting results in the years to come!


        The Ultimate Tech Frontier: Your Brain by Charlie Stross

        Ramez Naam is the author of 5 books, including the award-winning Nexus trilogy of sci-fi novels. Follow him on twitter: @ramez. A shorter version of this article first appeared at TechCrunch.

        The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it's possible.

        You've probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

        Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy -- sharing what we see, hear, touch, and even perhaps what we think and feel with others.

        Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn't real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he'd felt the heat of the dragon's breath before.

        - Adapted from Crux, book 2 of the Nexus Trilogy.

        Sound crazy? It is... and it's not.

        Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

        Or consider vision. For more than a year now, we've had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we're looking at. (They'd do even better with implants in the brain.)

        Sound, we've been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.

        Nexus

        In rats, we've restored damaged memories via a 'hippocampus chip' implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

        In monkeys, we've done better, using a brain implant to "boost monkey IQ" in pattern matching tests.

        We've even emailed verbal thoughts back and forth from person to person.

        Now, let me be clear. All of these systems, for lack of a better word, suck. They're crude. They're clunky. They're low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6's A8 chip has 2 billion transistors. (Though, let's be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

        The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don't even have that.

        The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That's a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who've been paralyzed or suffered brain damage.

        This is not yet the iPhone era of brain implants. We're in the DOS era, if not even further back.

        But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

        They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he'd ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

        - An excerpt from Apex, book 3 of the Nexus Trilogy.

        The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what's happening.

        We've already seen that deep brain stimulators can do amazing things for patients with Parkinson's. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you're running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let's get hungry for lunch around noon. Turn down the sugar cravings, though.)

        Crux

        Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the "hippocampus chip" that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we're likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them -- just as we have for the rat.

        That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It'd also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

        The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

        That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it's just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and twitter have successively changed public discourse.

        Digitizing our thoughts may have some negative consequences, of course.

        With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

        "Focus," Ilya snapped. "Can you erase her memories of tonight? Fuzz them out?"

        "Nothing subtle," he replied. "Probably nothing very effective. And it might do some other damage along the way."

        - An excerpt from Nexus, book 1 of the Nexus Trilogy.

        The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don't come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?)

        Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

        For now, we're a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

        Apex

        That's changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They've shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They're working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all.

        Even so, we're a long way from having such a device. We don't actually know how long it'll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it'll take decades. Maybe it'll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk to reward. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

        Will we be ready? I, for one, am enthusiastic. There'll be problems. Lots of them. There'll be policy and privacy and security and civil rights challenges. But just as we see today's digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we'll see much more positive than negative if we ever get to direct neural interfaces.

        In the meantime, I'll keep writing novels about them. Just to get us ready.


        May 13, 2015

        No Stack Startups (the Technical Side) by Albert Wenger

        This past Saturday was Video Hackday in New York City. It had been organized by the team at Ziggeo with the help from many others, including a great roster of sponsors. I enjoyed participating and wrote a little hack that you can see here. Even though I am pretty rusty as a developer, I managed to write something that lets anyone ask a written question (which then lives at its own URL – all handled via Firebase) and others can answer by recording a video response (recording and playback from Ziggeo) which is then analyzed via Clarifai to determine if it is a person answering. If you had told me in, say, 2000 that all of this could be built by a single engineer in less than 8 hours I would have told you that’s completely impossible.

        So why is it possible now? Well, that was the topic of the talk I gave at the kickoff for the hackday. Here are the slides, which speak mostly for themselves and were inspired by my partner Andy’s post from earlier that week about the No Stack Startups.


        55 Cancri e: Now With Added Volcanoes by Astrobites

        Title: Variability in the super-Earth 55 Cnc e
        Authors: Brice-Olivier Demory, Michael Gillon, Nikku Madhusudhan, Didier Queloz
        First Author’s Institution: Astrophysics Group, Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE, UK

        55cancrie

        Artist’s impression of 55 Cancri e, an exoplanet with a molten surface (left) occasionally covered by giant volcanic plumes (right). Image credit: NASA/JPL-Caltech/R. Hurt

         

        Of the more than 1500 exoplanets discovered over the past two decades, perhaps the most intriguing and unexpected have been the ultra-short period planets, worlds so close to their parent star that they complete an entire orbit in less than a day. Most are small, less than twice the radius of the Earth, and are so hot that their rocky crusts are being melted away. The material melting off the surface forms a cloud of debris, which could be used to investigate the composition of these mysterious worlds. Unfortunately,  most of them are too small for our current instruments to observe in detail.

        One planet that is within reach is 55 Cancri e. Roughly twice the radius of the Earth, it orbits in just 18 hours, so close to its star that the surface temperature is roughly 2400K. Its host is the brightest star currently known to have transiting exoplanets, making 55 Cancri e an ideal target for further study. Previous observations have been able to measure its mass and radius more precisely than any other rocky exoplanet. 55 Cancri e was found to be much less dense than the Earth, suggesting that it might be enveloped in a layer of super-heated water, or even be partly made of diamond. The authors of this paper have added a new phenomenon to this intriguing world: Signs of possible volcanism.

        Using the Spitzer space telescope, the authors observed the system for a total of 85 hours between 2011 and 2013. Spitzer is an infrared telescope, allowing it to observe at wavelengths around 4.5μm, where the thermal emission from 55 Cancri e is strong. Over the course of their campaign they were able to observe six transits of the planet across its star, as well as eight “occultations”, where 55 Cancri e passed behind the star.

        The authors first studied the transits of the exoplanet across its host star, where the amount of light blocked shows how large the planet is relative to the star. Taking their six transits together, the authors found that the radius of 55 Cancri e is around 1.92 times that of the Earth. Previous studies had returned a larger radius, at 2.17 Earth radii. The new, smaller radius means that 55 Cancri e is denser than previously thought, and no longer needs the exotic chemical compositions suggested by the earlier results.
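        For a rough sense of the signal involved, the transit depth is just the square of the planet-to-star radius ratio. The stellar radius used below (about 0.95 solar radii for 55 Cancri) is an assumed value for illustration, not a number taken from the paper.

        # Transit depth ~ (R_planet / R_star)**2.
        # The stellar radius (~0.95 R_sun) is an assumption for illustration.

        R_EARTH_KM = 6371.0
        R_SUN_KM = 695_700.0
        r_star_km = 0.95 * R_SUN_KM

        for r_planet_earth in (1.92, 2.17):   # revised vs. previous radius estimate
            depth = (r_planet_earth * R_EARTH_KM / r_star_km) ** 2
            print(f"{r_planet_earth} R_Earth -> transit depth ~ {depth * 1e6:.0f} ppm")
        # ~340 ppm vs ~440 ppm: the revised radius shaves roughly a fifth off the depth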

        Along with the new radius, the transit depths also appeared to be varying slightly, although not by enough to be considered significant on their own. The occultations were a different story altogether.

        fig4a

        Combined light curve of the occultations of 55 Cancri e in 2012. The amount of light blocked suggests that 55 Cancri e had a temperature of around 1427K in 2012…

        fig4b

        …but the much bigger drop in 2013 implies a temperature of 2699K, an increase of over 1000K!

         

        As 55 Cancri e passes behind the star, the light from the system suddenly changes from a combination of the star's light and the light from the planet's day side, to just the light from the star. The drop in light when this occurs, shown in the plot above, therefore reveals the brightness, and hence the temperature, of the planet.

        The authors found that between 2012 and 2013 the depth of the occultation increased by a factor of 3.7—corresponding to an increase in 55 Cancri e’s temperature of over 1000K, from 1427K to 2699K.
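        As a back-of-the-envelope check (not the authors’ full analysis), the occultation depth scales with the planet’s day-side surface brightness, so for a fixed planet size the ratio of the two depths should match the ratio of blackbody intensities at Spitzer’s 4.5μm band evaluated at the two temperatures:

        import math

        # Ratio of Planck intensities at 4.5 microns for the two quoted temperatures.
        # For a fixed planet size, this should match the measured ratio of occultation depths.
        H = 6.626e-34   # Planck constant, J s
        C = 2.998e8     # speed of light, m/s
        K = 1.381e-23   # Boltzmann constant, J/K

        def planck(wavelength_m, temperature_k):
            """Blackbody spectral radiance (overall constants cancel in the ratio)."""
            x = H * C / (wavelength_m * K * temperature_k)
            return 1.0 / (wavelength_m ** 5 * (math.exp(x) - 1.0))

        wavelength = 4.5e-6                              # Spitzer's 4.5 micron band
        ratio = planck(wavelength, 2699.0) / planck(wavelength, 1427.0)
        print(round(ratio, 1))                           # ~3.7, matching the measured increase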

        What could have caused such a massive change? It wasn’t the star: 55 Cancri has been watched almost continuously by ground-based telescopes for eleven years, and shows no variations larger than a Sun-like sunspot cycle. Something had to have happened to the planet itself to cause such a large brightness change.

        The high surface temperature of 55 Cancri e suggested an explanation to the authors. With temperatures of over 1000K, most if not all of the crust would be molten rock. This could result in massive, widespread volcanism, similar to that seen on Jupiter’s moon Io.

        On 55 Cancri e, the authors suggest that a massive plume of material from a volcano could raise the photosphere up through the atmosphere. This would mean that, in the 2012 occultations, we are seeing the light from the plume in the upper, cooler parts of 55 Cancri e’s atmosphere. As the plume dissipated the light could shine through from the deeper, hotter parts of the atmosphere, resulting in the much deeper occultation in 2013.

        A volcanic plume could also explain the slight changes in transit depth, as the plume would change the thickness of 55 Cancri e’s atmosphere, blocking more light from the star as the planet passed in front of it. The authors calculate that this would require a plume somewhat larger than any observed in the Solar System, but this would not be surprising given the extreme nature of the system.

        The authors finish by noting that the presence of volcanism on 55 Cancri e raises a tantalizing opportunity. If the planet is repeatedly covered with material blasted out from its interior, then spectroscopy with the Hubble or James Webb Space Telescopes could be used to probe one of the most elusive properties of extrasolar planets: their chemical composition. As 55 Cancri e is both one of the most extreme and one of the most easily studied exoplanets, future observations will probably bring many more surprises.


        New Horizons spots Kerberos and Styx by The Planetary Society

        New Horizons has now spotted every one of Pluto's satellites...all the ones we know about, that is.


        May 12, 2015

        With Charley and C'rzz: The Divergent Universe. by Feeling Listless

        Audio  About half way through listening to this portion of Big Finish Eighth Doctor audio stories, I asked the social media what the general consensus of opinion of it was, and the social media answered that it's generally thought of as a misstep, but that Scherzo and The Natural History of Death are classics.  Having listened to the whole thing again, that assessment seems fair.  As you'll see from the ensuing paragraphs written at the end of each adventure, the whole process became a bit of a trial.  I'll list a few of the flaws which I didn't fit in below in a moment, but it's worth noting that despite those, there isn't really that much difference between the strike rate in this run of episodes and the average season of most Who.  With a few notable exceptions, most average seasons of Who since the 60s have had a couple of classics, a few good but flawed stories and some total misfires.  It's even true of the imperious first couple of Eighth Doctor Big Finish seasons and certainly true of every series since the show returned.

        As with pretty much every era of Doctor Who, if they're not careful writers can find themselves banging up against a premise which has some fatal flaws.  In this case, it's that in order to justify the new universe the writers must have been asked to tell the kinds of stories they simply couldn't or wouldn't attempt under other circumstances, be experimental and in some cases this means pulling a Scherzo, The Natural History of Death or Caerdroia, which are so ludicrously different they tip over into brilliance, but in everything else there is a sense of trying to force weirdness into an otherwise trad bit of Who.  But mainly it's just that in some cases they're unpleasant to listen to with ugly sound design, some really quite boring or cliched characters talking and talking and talking in scenes which go on forever, unfunny satire and a sense of trying to follow a premise or structure which is not unlike The Keys of Marinus across too many episodes.

        The social media also signalled a dislike of C'rzz who, as is so often the case when the franchise shifts out of the Doctor and his single companion structure, creates a distancing effect between Eighth and Charley, because they have to find something for him to do and it also means there's less of a requirement for depthful secondary characters.  He's also not an especially appealing figure, oscillating between providing a Data-like misunderstanding of humanity and moping around like a tragic Adric.  Plus, having been given the ability to change his colour to match his surroundings, at least in this run of stories, they don't do much with it other than helping the other characters to describe their surroundings, "C'rzz, you're as blue as these walls..." that sort of thing.  None of which should be seen as a criticism of Conrad Westmaas, whose performance is the only way the character is even half appealing.

        But mainly it's a lack of form.  During what must have been the planning stages for the season, the new television series was announced which meant that the Eighth Doctor audio, like the novels and comics went from being the ongoing adventures of the incumbent incarnation to filling in a gap but unlike the novels and comics, the audios could and had to continue and so Big Finish were somewhat forced by commercial requirements to drag Eighth back into the normal universe before the whole notion of the Divergent Universe itself had enough time to settle in.  Apparently, I've just read now, some of the odder post-Divergent stories are light rewrites of another set of stories which should have appeared within here which explains a lot.  Either way like the Doctor and his companions I'm happy to be out of this now and pleased that unlike back then I haven't got to wait eight months to discover what happens next.

        Zagreus

        Eighteen months on from the Neverland cliffhanger and we were given this. With its massive cast and extended running time across three CDs, it seemed like it was going to be the best Doctor Who story ever. Then we heard it. Despite being quite the fan of both Gary Russell and Alan Barnes, I still find parts of it almost unlistenable. It's one of those glorious messes which sometimes crop up in Who, where the writers have the best of intentions, in this case attempting to do something a bit different with the anniversary story by having everyone back but playing different characters and creating a direct continuation of an ongoing narrative arc.  Except the show only really snaps back into place when the past Doctors are effectively playing themselves and we return to Gallifrey for the back door pilot for that spin-off series, but in no way is it a satisfactory conclusion to that cliffhanger (perhaps because Scherzo is next).

        Scherzo

        When I originally reviewed Rob Shearman's script it was through the prism of knowing that the series was about to return to television, agog at what a potential new audience might make of something which has all of the elements of Doctor Who without actually being anything like Doctor Who.  Now it seems even more alien, even though to an extent you can see the DNA of the Capaldi model in the Zagreus-infected Eighth.  Having one of the franchise's legendary scenes ("I love you") twisted back in on itself, with this Doctor and Charley turned inside out as characters, is almost as scary as the actual body horror that runs through the piece.  There's no denying the bravery here.  After returning McGann to the fold and creating an utterly adorable incarnation, Big Finish now turn around and, just as happened in the novels and to a lesser extent the comics, take him away from us.

        The Creed of the Kromon

        Hello C'rzz.  Hello Kro'ka.  Sharing plenty of ideas and themes with writer Philip Martin's earlier Vengeance on Varos and Mindwarp, we're already seeing signs of how, although the Divergent Universe arc is supposed to be an exploration of potentially experimental, alien territory, the traditional elements of Doctor Who will always assert themselves.  The spark of the narrative is the Doctor trying to get the TARDIS back.  They stumble into a very bad political situation and ultimately end up toppling a regime after empowering the natives, a by-product of a need to rescue a companion who's been damseled, in this case in the most horrific of ways, which I have serious issues with even if it's resolved at the end and could easily have been somewhat smoothed over if the writer had considered a way of keeping Charley's senses intact and having her take advantage of her situation.  Horrible to listen to as a piece of audio too, due to the abundance of ring-modulator-like vocal treatment.

        The Natural History of Death

        A cocktail of Orwell, The Macra Terror and UKIP's election manifesto which actually still works even once you've been apprised of the twist. Like the Doctorless stories of the new era, it's very much describing the viral effect the Doctor's ethos and morality can have on a society or even just a single person. If I've a criticism, it's the repetition and duration. As with a lot of the stories in this era, the episodes are of uneven length so as to fill the whole of the CD, which in this case does lead to a lot of scenes which say much the same thing in different ways. But that's the price you pay for experimentation, and Jim Mortimore is pushing the format to breaking point in a similar way to Shearman in Scherzo.  As I said in the introduction, there's a real effort not to simply tell the same kinds of stories which might as well have occurred in the Whoniverse, and that's certainly the case here.

        The Twilight Kingdom

        This is the moment the Divergent Universe arc began to confuse me on first listen.  The impression we have from the first couple of stories is that Zagreus still sits inside the Doctor's head and he's manically trying to suppress it.  Yet for all the material about mind control and whatnot in Will Schindler's script, Eighth is pretty much back to normal, and he and Charley are reaffirming their friendship (for all that he's not telling her the real reason for his mission).  Perhaps that's a result of the original schedule, which saw these stories folded into the monthly releases and mixed in with the other Doctors rather than run straight through as a season, so there was a drive towards making them internally consistent rather than necessarily consistent with each other.  The story itself is generally an inferior redo of The Chimes at Midnight with a less rigid structure.

        Faith Stealer

        Faith Stealer is the single Big Finish author credit of Graham Duff, prolific actor and the writer of Dr. Terrible's House of Horrible, Ideal and Hebburn (he also played the waiter in Doctor Who's Deep Breath).  He's a real fan too, and wrote this assessment of The Horns of Nimon for a DWM special concurrently with this release in 2004.  But as with all these stories, this script is a matter of taste.  The idea is sound, a kind of buyers' market for religion and spirituality, but for me at least it never quite seems to get going.  There are some good Pythonesque lines, not least in how the Doctor, Charley and C'rzz become part of the world, but I can't help feeling that it would have worked better in the "real" world with "real" religions, though of course that would have run the risk of offending someone, but it might have added some bite.  Plus the whole Kro'ka business is really starting to chafe.

        The Last

        If only.  Gary Hopkins, whose I, Davros was a classic example of just how to do epic audio prequels for important and popular characters (seriously, it's amazing and is available for just £20 on audio at Big Finish), misfires here.  Whilst there's nothing necessarily wrong with a slice of Bergmanesque nihilistic melancholia, I'm not entirely convinced that this is the right Doctor or companions or season of stories to contain it.  Opening in the aftermath of a holocaust, in what seems to be intended as a satirical discussion paper on what might have happened if Thatcher had gone nuclear, all the likeable characters die, the regulars mostly argue with each other and the Doctor loses all of his hope before the whole thing ends by breaking one of the great rules that Doctor Who really shouldn't have anything to do with.  I think this is becoming my least favourite run of stories that don't star Colin Baker or Peter Capaldi.

        Caerdroia

        Good old Lloyd Rose. Rose wrote two astonishingly good EDAs, The City of the Dead and Camera Obscura, and here she is wading into the difficult penultimate story slot in the Divergent arc, the Long Game or The Pandorica Opens of this series, and sets about explaining what the interzone is, what happened to the TARDIS, who the Divergence are and exactly what the Kro'ka's supposed to have been up to. Bloody marvellous in every respect in the end and a highlight of the series, largely due to the second half where the Doctor's randomly split in three and McGann's forced to play his various versions, from grumpy through to tiggerish, against one another as they search for the TARDIS and a way home, keeping them distinct all the while, which is a difficult task in audio, but she does and he does.  Is the title a nod to Doctor Who's new home?  This was released in November 2004, when the new series was well into production.

        The Next Life

        Releasing this in 2004, it was quite brave of writers Alan Barnes and Gary Russell to have a joke about the Doctor having memorised all the Liverpool F.C. strikers and goals from 1964-1965 to 2013-2014, describing the latter as "a terrible season."  Apparently, it wasn't that bad.  They just missed out on the Premier League, coming second, and went out in the 3rd and 5th rounds of the League and FA Cups.  Though I suppose the Doctor might argue that actually failing to win anything is "terrible".  The Next Life isn't terrible, in large part because it's two of the Eighth Doctor's best writers giving everyone some witty dialogue, has the return of Daphne Ashbrook and Paul Darrow to the franchise on suitably bonkers form and manages to wrap up most of the loose ends from the Divergent Universe arc in a pretty logical way, dragging our heroes back into the Whoniverse with a cliffhanger which is both inevitable and necessary.  Good show.


        The Sky of Things by Goatchurch

        I picked up on the Chicago Array of Things from this clip at the BBC.

        Here’s a screen grab of the circuit as the guy is showing off his hardware kit.

        click1

        I was struck by the similarity with the kit I’ve been working with and taking on my (frustratingly infrequent) hang-gliding flights.
        palvario1

        It shouldn’t be a surprise that they look the same. All we’ve done is bought all the cheap mini sensors we can buy and soldered them together onto one board.

        The Chicago group make a big thing about the beauty of their hardware box design, which it is. But what do you DOOOO with the data? This is the question I posed to the Barcelona SmartCitizen Sensor Kit people, whom I saw at the MakerFaire in Newcastle a couple of weeks ago.

        The answer was the usual: we put the data on the internet so that anyone can download and do whatever they want with it.

        Like WHAT?

        And so on.

        I have to harangue because I am constantly on the lookout for techniques and people who are where I’m at with this type of sensor data. There are no adequate tools. I am constantly hacking different trials and plots of the data using bare Python and discovering all sorts of things that are important.

        For example, I don’t think these sensors are quite right. The digital compass needs calibration every time you plug it in, and the accelerometer was biased in one of its planes.

        And here is the graph: humidity is in RED, windspeed is the horizontal green squiggly line, and barometric pressure is the white line.
        humidbarow
        The humidity only got to 90%, having started at 70%, which means I only got 2/3rds of the way to cloudbase. Note how as the barometric readings go down (with altitude), humidity is going up, because the air temperature (not displayed) is getting colder and approaching the dewpoint.
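        For anyone wanting to turn readings like these into something physical, two textbook approximations cover most of what the graph is showing: the international standard atmosphere formula for pressure altitude, and the Magnus approximation for dewpoint. This is a generic sketch, not the code behind the plot above.

        import math

        def pressure_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
            """Altitude from static pressure (international standard atmosphere)."""
            return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

        def dewpoint_c(temp_c, rel_humidity_pct):
            """Dewpoint via the Magnus approximation."""
            a, b = 17.27, 237.7
            gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
            return (b * gamma) / (a - gamma)

        print(pressure_altitude_m(950.0))    # ~540 m above the 1013.25 hPa level
        print(dewpoint_c(10.0, 70.0))        # ~4.8 C: air at 70% humidity is ~5 C off saturation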

        The humidity value is waving about all over the place, and then it settles down as soon as I take off (at the vertical yellow line). Then it begins waving around everywhere before I come in to land 50 minutes later (the vertical lines are at 5-minute intervals).

        I don’t know the reason for this. Is it the windspeed? Is it the proximity to the ground?

        My experiments on a bicycle have been inconclusive, not least because the vibration of the road tends to shake many of the components loose. And I can’t pedal fast enough.

        You can tell immediately that there’s a lot more to getting something useful from these devices than simply taking the digital readings and assuming they’re the truth.

        We had this problem in cave surveying when digital compasses and clinos came about, and people made totally buggered-up mistakes and wildly out-of-calibration cockups that they never would have done with a mechanical instrument. Just because it’s an electronic number you tend to want to write it down to two decimal places in degrees. But if you were using the analog compass you might have actually noticed the way it swings when you hold your helmet light or steel karabiner close enough to it to affect the magnetic field.

        We like these digital devices because we don’t need to think. Just put the uncalibrated values on the internet and hope that someone else can do the thinking for you.

        I will only know that these sensor folks are on the same page as me when they get desperate and one day nail every single sensor node they own onto the same lamp-post in order to see to what extent they agree at all. And then I’d like to see the existence of some software that is actually able to correlate the measurements between the different devices to account for their variability.
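        For what it’s worth, here is a minimal sketch of the sort of cross-check I mean: take two co-located sensors logging the same quantity, resample one onto the other’s timestamps, and fit a straight line to estimate their relative scale and offset. The signal, sample rates and error terms below are made up for illustration.

        import numpy as np

        def signal(t):
            """Stand-in for the real quantity both devices are measuring."""
            return 20.0 + 2.0 * np.sin(t / 60.0)

        t_a = np.arange(0, 600, 5.0)     # device A logs every 5 s
        t_b = np.arange(0, 600, 7.0)     # device B logs every 7 s
        a = signal(t_a) + np.random.normal(0, 0.05, t_a.size)
        b = 1.04 * signal(t_b) - 0.8 + np.random.normal(0, 0.05, t_b.size)   # B has a scale and offset error

        # Resample B onto A's timestamps, then fit B ~ scale * A + offset.
        b_on_a = np.interp(t_a, t_b, b)
        scale, offset = np.polyfit(a, b_on_a, 1)
        print(f"device B reads ~ {scale:.2f} * A {offset:+.2f}")   # recovers ~1.04 * A - 0.80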

        That’s my demand. It’s what I call “Test Driven Development”, except the Tests are lavished on actual real things that matter which you don’t know about (such as the repeatability of these measurements), not on pointlessly obvious-when-they-fail bits of self-contained code. That’s the kind of TDD I won’t truck with when I am lost in a sea of uncertainty as to what is to be done. For months I am having to do the equivalent of programming doodling and sketching of ideas. Not possible to produce any finished artwork design.


        Micropache: Cut the crap out of getting Apache running on your Mac by Zarino

        I recently—finally—upgraded my Mac from OS X 10.7 Lion to 10.10 Yosemite. Over the years, I’d become resigned to the fact that Apache is a pain in the ass to get running after any OS X upgrade.

        I develop on Apache/PHP sites so infrequently that it really doesn't make sense for me to go rummaging around system config files and setting up Virtual Hosts for each site. Nor am I really comfortable running MAMP, which requires me to fumble with buttons and checkboxes every time I want to start work.

        Since I’m already in a terminal window—editing files with TextMate and managing source code with git—it makes sense to run an Apache server the same way.

        In any other language or framework (eg: Python, Ruby, Django, Jekyll…) starting a development server in the current directory is as easy as running a single command. In Python, for example, it’s:

        cd ~/projects/some-website.org/
        python -m SimpleHTTPServer
        

        In Jekyll it’s:

        cd ~/projects/some-website.org/
        jekyll serve
        

        In Rails it’s… you get the idea.

        So it hit me, why spend hours (days?) hacking my Mac to run Apache virtual hosts, when I could instead just fire up an Apache daemon in the current directory, serve the files, and be done with it? One command. Simples.

        Turns out it really is that simple

        The apachectl command doesn’t let you run a new server from a given directory, but the lower-level httpd command does. Its arguments are pretty gnarly, and it needs to be provided with a config file (this is Apache after all!). So I wrote something that wraps it all up into a single command: Micropache.

        Now when I come to work on a WordPress site, for example, I cd into the project directory and run micropache.

        It asks for my root password (getting Apache to run on a Mac without root privileges was a challenge I didn’t have time to face) and then starts serving the files at http://localhost on port 80.

        cd ~/projects/some-website.org/
        micropache
        Password:
        [Mon May 11 08:52:53 2015] [mpm_prefork:notice] [pid 39321] AH00163: Apache…
        [Mon May 11 08:52:53 2015] [core:notice] [pid 39321] AH00094: Command line:…
        

        Each HTTP request is logged to the console, and when I’m done, ctrl-C will quit the server, as you’d expect.

        The whole script took me about an hour to write – but it’ll only take you ten seconds to install: github.com/zarino/micropache.

        Combined with brew install homebrew/php/php56 and brew install mysql, you can basically outsource all the customary headaches of getting a local MAMP server running on a new Mac, without ever leaving your terminal.

        Micropache


        May 11, 2015

        Soup Safari #26: Cauliflower and Sweetcorn at Shirley Valentine's Sandwich Company. by Feeling Listless







        Lunch. £2.00 (80p for the roll). Shirley Valentine's Sandwich Company, 109 Mount Pleasant, Liverpool, Merseyside L3 5TF. Phone: 0151 707 8093. Website.


        A message from our sponsors: now with added gaming content! by Charlie Stross

        This has been a busy month for my backlist: "Accelerando" has just been published in French for the first time, and "Halting State" is due out in Italian really soon ...

        But there's something new on the horizon.

        The Phantom League, which came out in 2010, is a space-trading board game in the lineage of "Elite"; you're the captain of a merchant spaceship, exploring new star systems, establishing trade routes, engaging in acts of piracy and otherwise trying to get one up on your rival players. It's a whole lot of fun, and the game has evolved over the past five years, with several expansion packs and updates.

        Well, I'm pleased to announce that the forthcoming second edition is going to be based in the universe of Singularity Sky and Iron Sunrise! In the new game you take the role of an individual with ambition and a ship, not an Admiral of some mighty armada conquering planets. It's all personal: your goal is to become the most famous (or infamous) spaceship captain in the whole galaxy—whatever it takes. Here's the Announcement; you can sign up to a newsletter for further updates as the game gets closer to release.


        May 10, 2015

        Macbeth (Arden Shakespeare. Third Series). Edited by Sandra Clark and Pamela Mason. by Feeling Listless

        Theatre  After a couple of years away from the core series whilst they pay attention to other Early Modern Dramas, Arden returns to Shakespeare with their third edition of Macbeth. Glancing through the list which appears in this year’s Arden catalogue, there aren’t that many plays still waiting for the edition uplift from the second series, and a glance through Amazon indicates that by September 2016 everything but A Midsummer Night’s Dream will be available. Not that it will end here; editors for the fourth series have already been announced, with those editions due in the 2020s (which just demonstrates the lead time that some of these books require). How these will differ from the A3s, time will tell.

        Anyway, back to Macbeth and this edition, edited by Sandra Clark, Senior Research Fellow at the Institute of English Studies in the University of London, and Pamela Mason, currently a lecturer in English at the University of Birmingham. The former provides the introduction and the discussion of textual legitimacy in the appendices; the latter is the editor of the text and provides the textual notes, including editing justifications, also in the appendices. There’s some heroism in this division of labour because while Clark’s work will implicitly be read by the book’s whole audience, Mason’s notes sit within an interstitial consultation space, only referred to if needed, which is a shame because they contain a fascinating quantity of trivia.

        The introduction freewheels around Macbeth, ignoring anything like a traditional structure or reiteration of the usual themes, this being the sort of play for which there isn’t really a shortage of that sort of thing already. So we have a short discussion of Macbeth as an example of tragedy. A close textual analysis of the use of time in the play. Its setting and realisation of Scotland as a geographical and historical event. A discussion of its sources but note, its adaptation from Holinshed rather than how that chronicler developed his version. Plus a theatrical “history” which chooses as its themes key elements of the play: the extent of Macbeth’s culpability, the pre-eminence of the witches, the setting and how various actors and directors have treated the ending.

        Much of this underscores that, like most of Shakespeare's plays, Macbeth is set in "Scotland" rather than a real place, that the featured "history" is nothing of the sort, and that when productions do affect accents and have the cast sweeping around in kilts they're deluding themselves with an approach which has about as much legitimacy as Hamlet wearing clogs and a fez whilst affecting a Scandinavian pronunciation.  Which isn't to say Scottish actors haven't made great Macbeths and there haven't been useful productions set in Scotland, it's just that the underlying elements of the play don't support it, not least because in other plays Shakespeare made the Scottishness of characters a key component.

        As is also so often the case with these Ardens, my eye is caught by the appendices, which is where the textual discussion resides. For decades, critical opinion has focused on the notion that the version of Macbeth we have now is not as originally written by Shakespeare, that its single textual version as it appears in F1 has been interfered with or adapted by another hand, usually attributed to Thomas Middleton, largely because of the similarity with his own play The Witch, notably in relation to some songs. This led to Gary Taylor including the play in the Oxford Collected Works of Middleton, and it’s this analysis that I’ve seen cited as an example of Shakespeare the collaborator.

        Clark reiterates all of these arguments at length, with sources, before, like so many A3 editors before her, stripping away the hearsay and presumption to reveal that we actually don’t know anything, that the evidence is circumstantial at best.  She cites an electronic analysis by Marcus Dahl, Marina Tarlinskaya, and Brian Vickers (which is available to read online here) which compares the supposedly added passages with Middleton's work and doesn't find a match (though she does note that others have argued against their work because the Middleton database they used is incomplete).  But the general message is that just because the play is short and is interestingly structured in places doesn't mean any of it is missing.

        Macbeth (Arden Shakespeare. Third Series). Edited by Sandra Clark and Pamela Mason. 2015. RRP: £8.99. ISBN: 9781904271413. Review copy supplied.


        Dear IBM, HP, ORACLE, SAP, CISCO ... v2.0 by Simon Wardley

        Dear IBM, HP, ORACLE, SAP, CISCO ...

        Since my last letter, we have heard it is not Oracle or SAP or Microsoft that is interested in buying Salesforce. Maybe that's just bluster, maybe no-one is interested in buying (or merging) or maybe it's one of you or maybe it's someone irrelevant. All of these are good options for you.

        However, on the off chance that someone is actually bidding for Salesforce and it's not any of you, can I kindly suggest you consider getting together to buy it between you as a consortium.

        There is one company (Amazon) that you should not want to see get its hands on Salesforce, no matter how unlikely the possibility is.  You should examine the value chains that Salesforce is involved in, compare this to Amazon and consider how Amazon could bring its ecosystem experience to bear whilst exploiting the enterprise position of Salesforce. You should consider what Amazon already knows about the platform business of Heroku (through consumption data). You should review what combination possibilities exist and WHERE your own inertia can be used against you.

        Even if it is highly unlikely, the remote possibility of this pairing should be sending shivers down your spine. You do not want to find yourself in this position. Hence, unless you have information to the contrary, I would consider doing something about it.

        Maybe you'll be lucky. Maybe it's someone irrelevant. Maybe it's no-one.

        This is a friendly warning.

        Kindest

        Simon W

        P.S. I've been harping on about this threat for many years. I know it's unlikely but if I was Bezos then this is the move that I'd make.


        May 09, 2015

        I am Old Labour by Simon Wardley

        I am Old Labour.

        I am a social capitalist.

        I view that social cohesion, competition and the common interest are of paramount and intertwined importance.

        I view that our purpose should be a better and fairer society for all. To create political freedom, I reject the notions of economic and social subservience. I hold to a view of  common interest above self interest. I do not view the market as a source of 'good' or our goal but a tool to be used and exploited to achieve our purpose.

        For me, the market is a mechanism to achieve common interests and encourage competition. It is simply an economic tool.

        For me, competition is a necessity to progress, to evolve, to adapt and to better our national standing.

        For me, social cohesion requires compassion, mobility, opportunity, autonomy and purpose for all. Without social cohesion our future competitive interests are undermined.

        I do not care if "the cat is white or black, as long as it catches mice". I reject the ideological for the practical. I would use all tools available to further our common & national interests.

        I reject the extremes of Marx and Friedman. The myths of 'trickle down', of small Government, of laissez faire, of centrally planned, of privatisation of infrastructural services to monopolies and nationalisation of that best served by a market.

        I hold to the centre ground where market and government are both of importance for competition. I hold to that centre of Adam Smith, Hayek and Keynes.

        I take the position that we should exploit fiscal, monetary and industrial policy to our common benefit. We should not fear to change the market to achieve our purpose.

        I view that Government waste (of which there is much) should be reduced but Government itself should not be. For every £1 saved through efficiency, I would have £1 invested through Government towards our future including measures of direct investment, R&D and reduction of past debt.

        I view that we are capable of dealing with the complexity of competition and that our civil service has only been hampered by past management dogma of one-size-fits-all solutions - outsource everything, agile everywhere and the market knows best.

        I reject the extremes of social ideology within the Conservatives but agree with many of their fiscal policies. For me, they don't go far enough.

        I reject the extremes of financial imprudence within New Labour but agree with many of their social policies. For me, they don't go far enough.

        I am more Red than Red, more Blue than Blue.

        My party passed into history in May 1994. One day, it might return. Until such time, I have no-one to vote for.

        I am a social capitalist.

        I am Old Labour.


        Film Handling. by Feeling Listless

        Film .tiff's Reel Heritage event investigated the process of preserving the physical aspects of film and its two primary lectures with media archivist Christina Stewart are online. Firstly, here she is with a primer of all the different types of film available and their various aspects which will answer many, many of the questions which have been prompted by technical commentaries on dvd across time:



        Next, here's Stewart leading a workshop on handling the actual film which is almost a piece of slow cinema in and of itself:


        Due to ion engine failure, PROCYON will not fly by an asteroid by The Planetary Society

        PROCYON, the mini-satellite launched with Hayabusa 2, will not be able to achieve its planned asteroid flyby due to the failure of its ion engine.


        In Pictures: LightSail, Meet X-37B by The Planetary Society

        United Launch Alliance has released photos showing the Air Force's X-37B spaceplane being stacked on its Atlas V at Cape Canaveral Air Force Station.


        Mars Plans Advance (and Occasionally Fade) by The Planetary Society

        In the last two months, there has been significant news about the European-Russian 2018 mission and about NASA’s 2020 rover. NASA also has announced that it would like to send a new orbiter to the Red Planet in the early 2020s.


        What Images Will We Get Back from the LightSail Test Mission? by The Planetary Society

        With less than two weeks before launch, here's an in-depth look at whether the LightSail test mission's attitude control system bug will keep us from seeing pretty pictures taken by the spacecraft.


        May 08, 2015

        Aftermath by Charlie Stross

        Okay, discuss.

        Two notes:

        1. Here's the historic 1945-2010 election turnout chart broken down by UK country. Here are some notes on historic turnout by the Independent, going a little off-message (their Russian owner insisted they back the Conservative party). Turn-out is currently estimated around 62-63% of the electorate, but hit 82% in parts of Scotland, and seems to have averaged around 75%.

        2. Ed Miliband (Labour leader) and Nick Clegg (Liberal Democrat leader) both look likely to resign. Meanwhile the count isn't final yet, but the Conservatives are on course to form a narrow majority (22 seats to declare, 13 needed, LD on 8, so if they get 5 more seats they can form a Con/LD coalition, and 13 to rule outright).

        NB: Play nice. Moderators will be wielding yellow and red cards freely in event of any gloating/triumphalism or sour grapes: let's keep this polite

        UPDATE as of 12:40pm it's a confirmed Conservative majority. Clegg, Miliband, Farage resigning (rumours that they are to be the new Top Gear line-up cannot be confirmed at this time). 30% swing to SNP in Scotland virtually wipes out all other parties—Labour, Conservatives and LibDems down to 1 seat each. Interesting times ahead ...

        MODERATION NOTE

        The following topics keep coming up in the discussion thread. They are nothing to do with the 2015 General Election results, they are derailing, and any further comments on these subjects will be unpublished as soon as I see them:

        * US ethnic politics in the south vs. the coastal states
        * Anti-semitism and its manifestations
        * Whether what Julian Assange is alleged to have done constitutes rape


        200 metres from cloudbase by Goatchurch

        Well, aside from losing the extra day at the beginning by being directed to the wrong hill while everyone flew in the Malverns, then two days of fog and drizzle in the camp site with a leaky groundsheet, and the day after the good one with its 40mph winds that exploded my UV-damaged tent that was then rolled up and shoved dripping into the back of the car minus several items that blew away without my noticing, my single 51 minute flight 7km downwind of Bache Hill as part of the British Open Series competition was not an unmitigated failure.
        tentblown

        I suffer for my sport.

        Becka says I should do more caving, because that’s a sensible sport. You can schedule 7 hours of caving any day of the week you like, go down the hole, and that’s exactly at least as much as you will experience. Hardly ever a disappointment.

        Call me picky, but I won’t be satisfied by 7 hours down the same old muddy wormholes every weekend, just as I can’t be satisfied by a bale of grass for breakfast. I’m not a horse. It’s not a digestible substance except as part of a balanced diet with many other things.

        Here’s a picture from the hill on the day.
        bachhill

        The gopro SD card lost its read-only tag and didn’t work, so I don’t have any pictures from the sky. My head is still spinning.

        Back in the old days when hang-gliding was actually popular (and I was a student with a crap glider that I flew crappily), these competitions were exclusive affairs, where only the top elite pilots from each club were allowed to darken the skies above the playing field.

        Nowadays, they need more newbies, so they’ve established a “club class” with a special easy task, and go to great lengths to make you feel welcome to have a go.

        They’ve even arranged a special retrieve car for us noobs, knowing that we’re not in a position to organize our own. I mean, we all like to think that we have the potential to do some amazing XC, but it isn’t realistic. But it could happen. And then you’d lean on some friend to be your retrieve driver for the week, and only ever get as far as the bottom landing field, causing immense humiliation in the eyes of the person who’s just wasted their holiday not being required for something that could only have been important if you weren’t so damn big-headed and useless at flying.

        A huge part of the game is mental attitude and negative thoughts, and the organizers are well clued up on this, thank heavens.

        Anyway, we drove to the hill. We were advised to take off early. The wind was off the slope blowing from the right. There were 30 gliders on the grass in front of where I’d parked. Suddenly the sky was full of them. Too many. And the wind veered back to parallel along the slope. I knew I had blown it again. Many were struggling to stay up. You could hear certain notes of concern in their voices as they talked over the radio about which end of the slope to go to before they lost it.

        I had fitted a helmet radio the day before, and it was like wearing stereo headphones for the first time ever. I was on another planet, leaning on to the front wires of my glider waiting to take off. I left the mic unconnected, or I might have said: “It’s one small step for a man”.

        A gust of wind came through. I was so spaced out I didn’t notice I’d left the ground. The funny thing about the flight is I don’t remember any wind during the whole experience, until I landed in a field full of sheep and lambs. Then it was blowing a gale.

        It was a joy to fly in the gaggle of gliders. They said you really only had to worry about the two or three gliders at your altitude. These are all top pilots, all doing the same thing (going up and not going down), all turning in the same direction.

        Eventually, after wallowing close to the bracken several times, I got up high, 500m above take-off, to an altitude that the earlier pilots had spoken of reaching, and I continued circling over the trees and the back of the hill. Everyone else rose above me and I lost track of where the heck they were.

        bachgpsside

        Then I lost the thermal, drifted downwind a bit, found another thermal, didn’t cling on to it with all my might, lost that, and came down at the high point of a hill to avoid descending into one of the many narrow valleys where it looked like there would be nowhere to land.

        Two gliders passed overhead like flyspecks while I was packing up. They travelled in long straight lines to find their elusive thermals, and one of them circled until it hit the clouds.

        The worked calculation of how far short of cloudbase I was will be included in the next blogpost after I have completed more analysis of the data.


        Irregular Clocks: The Influences of Generous Companions by Astrobites

        Title: Properties and observability of glitches and anti-glitches in accreting pulsars
        Authors: L. Ducci, P. M. Pizzochero, V. Doroshenko, A. Santangelo, S. Mereghetti, C. Ferrigno
        First Author’s Institution: Institut für Astronomie und Astrophysik, Eberhard Karls Universität, Tübingen, Germany
        Status: Accepted for publication in Astronomy and Astrophysics

         

         

        It’s nearly impossible to escape a clock. They’re on our phones, sit on our wrists, hang from our walls, glow in our cars, and tick in our computers.  Less regular but nonetheless largely reliable biological clocks beat within our chests, growl in our middles, and cause our eyelids to droop at night.  And predictably varying with month or season, the oceans rhythmically rise and fall, the Sun rises and sets, the Moon waxes and wanes.  All mark the irreversible march of time and measure our steps into the unknown future.

        To find the oldest and most reliable clocks, one must search far beyond the Earth and among the vast sprinkle of stars across the sky. Pulsars, spinning neutron stars with strong magnetic fields born in the deaths of massive stars, sweep radio (and sometimes X-ray) beams across the universe much like a lighthouse’s lamp, appearing to “blink” or “pulse” as their beam regularly sweeps in and out our view—like clockwork. The stars spin incredibly rapidly; the fastest pulsators, milisecond pulsars, can pulse almost 1000 times a second. If you were standing on the equator of the fastest known milisecond pulsar, PSR J1748-2446ad, you’d be whizzing around and around at 75,000 km/s (about 170 million miles per hour), or about a fourth of the speed of light. The slowest pulsar, PSR J2144-3933, completes a turn once every 8.5 seconds—still much faster than the Earth’s 24 hours! The timing of the pulses of some milisecond pulsars are so reliable that they best atomic clocks, are used to keep time, and have been proposed as the basis of  relativistic positioning systems (the GPS analog for the Solar System).

        The hermits among these dizzily-twirling stars are a fairly predictable lot, slowly spinning down as they radiate photons and possibly a gravitational wave or two or eject relativistic material. However, there are unpredictable members among them: some suddenly jump in spin speed, in what astronomers call a “glitch.” The radio pulsars’ more companionable cousins, on the other hand, are a different story. Some spin up, others spin down or have constant spins, and most perplexingly of all, some randomly switch between spinning up and down. The culprits of their non-uniformity? Their companions. A normal main-sequence companion star, mutually locked into a binary orbit with a pulsar, can eventually balloon in size as it exhausts its hydrogen reserves, evolves off the main sequence, and begins to burn heavier elements. The pulsar can siphon off part of the outer layers of its evolved companion and/or entrap its winds, then funnel the material along its magnetic field lines onto its magnetic poles, where the material is accreted. As the material hits the pulsar’s surface, it radiates its gravitational potential energy as highly energetic X-ray photons, hence the designation bequeathed to their class: X-ray pulsars. The angular momentum in the acquired material is not so easily lost, however; if assimilated, the pulsar’s spin-down can not only slow but reverse direction. Curiously, these accretion-powered pulsars, despite their unpredictable spin-ups and spin-downs, have rarely been observed to glitch.

        What would it take to observe a glitching X-ray pulsar? This requires knowing what causes a pulsar to glitch, which is still unclear, but likely lies in the bizarre physics of matter at the extreme densities at play in a pulsar. A pulsar is an incredibly dense star, consisting of one to a few solar masses packed into a radius of only about 12 km (tens of thousands of times smaller than the Sun’s)—thus, on average, around 10^14 times denser than the Sun! Such high densities cause the protons and electrons in the star to form a fluid in the densest part of the pulsar, its core. Moving outwards from the core, the pulsar becomes less and less dense, and these charged ions crystallize into a solid crust.

        As the pulsar’s spin decreases (increases), it attempts to become less (more) oblate. The pulsar’s solid crust, however, resists the change. Over time, stress builds up in the crust, which is eventually released in a sudden shift—measurable in microseconds—to the preferred oblateness, a shift of micrometers (a tenth the width of a hair or less) resulting in a starquake and a jump upwards (downwards) in rotation speed. The authors predict a quake every 10^5 years, possibly longer, for the four observed X-ray binaries they consider (crusts that are accreting material are thought to be more flexible and thus prone to shifts). Such long timescales mean that starquake-induced glitches from accreting pulsars are rare and difficult to observe.

        A second method of producing glitches lies in the behavior of the neutrons in the star. At the high densities in the pulsar, the neutrons are found in pairs and form an unusual state of matter called a superfluid. Unlike the charged ions, the superfluid resists spinning down (up) until the rotational speeds of the superfluid and the crust are significantly different, at which point the superfluid exchanges angular momentum with the ions, increasing the spin of the crust and causing a glitch. The authors calculate that such glitches occur every tens of years in accreting pulsars—much more frequently than starquake-induced glitches—and are thus more promising events to search for. In addition, the authors predict that the glitches occur over hours rather than seconds and that the largest jumps in rotation speed are about comparable to those of isolated pulsars. They find that anti-glitches, in which a pulsar suddenly spins more slowly, would appear much like glitches in accreting pulsars, except that the maximum jump in spin could be smaller by a factor of 10.

        Thus it appears promising that with frequent observations of X-ray pulsars, we will observe glitches in accreting pulsars soon. Such efforts are underway with existing X-ray telescopes onboard Fermi, Swift, and INTEGRAL, and accreting pulsars are prime targets for new instruments such as LOFT.

         

        For a detailed review of pulsar glitch models, check out this review by Haskell & Melatos 2015.

         


        OSIRIS-REx – Seeking Answers to the Sweet Mystery of Life by The Planetary Society

        The nature of the origin of life is a topic that has engaged people since ancient times. The samples to be collected by OSIRIS-REx, returned to the Earth in 2023 and archived for decades beyond that, may indeed hide the secrets to the origin of life.


        May 07, 2015

        MARVEL Climaxes. by Feeling Listless

        Film MARVEL have posted a press release about how the Avengers: Infinity Gauntlet films will be shot completely in IMAX, or at least on a jointly customized digital version of ARRI’s new large format Alexa 65, with directors the Russos testing the technology on what's looking increasingly like the direct prequel, Captain America: Civil War.

        First thing to notice is the mention of "IMAX's exclusive aspect ratio" which, as anyone who's been following the tussle between actual IMAX and FauMAX will know, is a bit of a moveable feast. Since it's digital, presumably this means the 16:9-like affair which usually turns up in the likes of The Hunger Games and Guardians of the Galaxy rather than the square frame that everyone expected when such things were shot on "film".

        But buried in the text is this quote:

        "The intent with the Infinity War films is to bring 10 years of accumulative storytelling to an incredible climax. We felt that the best way to exploit the scale and scope required to close out the final chapter of these three phases, was to be the first films shot entirely on the IMAX/Arri Digital camera."
        Here we are then, actual notice that Infinity War is acting as a kind of season finale for the MARVEL Cinematic Universe.  Presumably it won't be the end of the end, unless the whole thing fails in the next year or so which seems unlikely given the box office cash Avengers 2: Ultron Boogaloo has made.  Plus there's bound to be a Guardians 3 not to mention 2s for any of the characters handed their own films in Phase 3.

        But it will be the end of Thanos's glove storyline and it raises the tantalising prospect of what will come afterwards.  Will it be something as intricate as the jewels business, and will it have tentpole features like the Avengers (assuming the Avengers films don't simply continue)?  My guess is still something along the lines of Secret Wars, or even Secret Wars, but I'll probably be in my fifties by the time that comes around ...

        Updated later:

        MARVEL have also published a press release about the start of production of Civil War. First of all, we have a "synopsis":
        “Captain America: Civil War” picks up where “Avengers: Age of Ultron” left off, as Steve Rogers leads the new team of Avengers in their continued efforts to safeguard humanity. After another international incident involving the Avengers results in collateral damage, political pressure mounts to install a system of accountability and a governing body to determine when to enlist the services of the team. The new status quo fractures the Avengers while they try to protect the world from a new and nefarious villain."
        Which pretty much explains exactly how the adaptation is going to work and also how it puts Steve Rogers front and centre in the narrative. There's also a cast list, which is filled with the annoying "other films they've been in" nonsense, which I'll strip away to leave just a cast ... list:

        Chris Evans as Steve Rogers/Captain America
        Robert Downey Jr. as Tony Stark/Iron Man
        Scarlett Johansson as Natasha Romanoff/Black Widow
        Sebastian Stan as Bucky Barnes/Winter Soldier
        Anthony Mackie as Sam Wilson/Falcon
        Paul Bettany as The Vision
        Jeremy Renner as Clint Barton/Hawkeye
        Don Cheadle as Jim Rhodes/War Machine
        Elizabeth Olsen as Wanda Maximoff/Scarlet Witch

        I'm not sure that we officially knew about Bettany before, but there they are: The "New" Avengers. But hold on, there's another paragraph ...

        Paul Rudd as Scott Lang/Ant-Man

        Blimey. Oh hold on, there's some more:

        Chadwick Boseman as T’Challa/Black Panther
        Emily VanCamp as Sharon Carter/Agent 13
        Daniel Brühl
        Frank Grillo as Brock Rumlow/Crossbones
        William Hurt as General Thaddeus “Thunderbolt” Ross
        Martin Freeman

        Which makes this an even more stuffed movie than Avengers: Ages of Ultraman, but it can justifiably allow for some cameos. Black Panther's being introduced before his own film is out.

        But the real surprise here is General Ross, played by William Hurt as he was in the standalone Hulk film with Edward Norton. What's interesting about that is that Mark Ruffalo suggested MARVEL didn't have the standalone rights to a Hulk film, those still being at Universal. Have those now reverted back to MARVEL in the time it's taken to produce the Avengers films, or is Ross on loan somehow? What will his role be? On top of that, what do Daniel Brühl and Martin Freeman have to do with it? [Updated again: io9 has a potential explanation for why he's there]

        Also on a vaguely related topic, Scarlet Witch has been retconned in the comics so that she is not Magneto's daughter and not a mutant, presumably in an attempt to stop FOX retaining the rights. Perhaps at a certain point they'll decide there was no such thing as mutants and The X-Men have been Inhumans all these years ...

        And on an unrelated topic, isn't it strange that MARVEL haven't scheduled a film for November 2019? In 2017 and 2018 there are films out in May, July and November but there's a gap there. Hmm ....


        Theatre on Television. Updates. by Feeling Listless

        TV Having pleaded and implored for there to be more theatre on television, I was entirely remiss in not highlighting the broadcast of Juliette Binoche's appearance in Antigone at the Barbican on BBC Four the other week, directed by Tim Van Someren and now available on the iPlayer.

        The presentation (from BBC Arts, notice, not BBC Drama) was a near-perfect demonstration of how theatre can work in the home, visually interesting and with performances that do translate, though it's also true that the heightened, deliberately theatrical requirements of the piece are a factor.

        Also available to watch for the next week is The Curious Incident of the Dog in the Night-Time: From Page to Stage, a Learning Zone collaboration with the National about the making of the production featuring Nicola Walker from Spooks and Doctor Who.  That's also available as clips if you're reading this after that.


        Coding is Driving the Car (and More) by Albert Wenger

        A couple of days ago there was a much favorited and retweeted tweet by the often funny Startup L. Jackson 

        Whenever someone tells me “coding is the new literacy” because “computers are everywhere today” I ask them how fuel injection works.

        Now I happen to think that this is wrong in an important way but right in another and it is important to pick those two apart.

        Let me start with how it is wrong. Coding is knowing how to drive the car, not how to tune or repair the fuel injection. There is a huge amount of coding that can be done without understanding how a compiler or interpreter does its job or knowing about the registers in the CPU or any of the myriad other pieces that go into making code execute (in fact see below for how that will go even further shortly).

        But even equating coding with driving the car is somewhat short-changing it. Because driving the car is still specific to driving a car and won’t let you sail a boat or fly an airplane. Coding, on the other hand, will let you program anything that’s, well, programmable, which in the future will be everything.

        So how then do I believe that the tweet is also right? Well it is right in that what we tend to think of as learning to code today is probably not what coding will be like for most people in the future. Instead for the most part programming will be closer to using IFTTT or Zapier than to writing code from scratch in an editor. Still, you will need some understanding of what inputs and outputs are and grok the idea of breaking a process down into smaller steps that can then be combined to give you a desired result. That will be the essential knowledge about coding for most people and that is in fact a new type of literacy.
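
        To make that concrete, here's a toy sketch in Python of what that kind of literacy looks like in practice: naming inputs and outputs and chaining a few small steps into a "recipe", much as an IFTTT or Zapier automation does. Everything in it (the step names, the data) is invented purely for illustration.

            # A toy "recipe" in the IFTTT/Zapier spirit: each step has a clear input
            # and output, and the steps combine to give a desired result.

            def only_photos(items):
                """Input: a list of feed items; output: just the ones tagged as photos."""
                return [item for item in items if item.get("kind") == "photo"]

            def add_caption(photos, caption):
                """Input: photos; output: the same photos with a caption attached."""
                return [dict(p, caption=caption) for p in photos]

            def post_to_album(photos):
                """Stand-in for the final action, e.g. saving to a shared album."""
                for p in photos:
                    print("posting", p["name"], "with caption:", p["caption"])

            feed = [
                {"kind": "photo", "name": "sunset.jpg"},
                {"kind": "note", "name": "shopping list"},
            ]
            post_to_album(add_caption(only_photos(feed), "May 2015"))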

        One reason why that’s important is that there is currently a narrative that we can solve labor market problems by just training more software engineers and that there will be a nearly limitless demand for programming skills in the future. That I believe is the equivalent of thinking that everyone will need to be able to repair their car instead of drive it and training way too many car mechanics as a result.


        Ancestors of the Milky Way by Astrobites

         

        690958main_p1237a1

        Hubble Extreme Deep Field. Courtesy of NASA.

        Take another look at the featured image of this post. It’s one you may well be familiar with. The iconic Extreme Deep Field is the end result of Hubble’s sensitive eye staring for weeks at a tiny, empty patch of sky so small that it could be blocked by a grain of sand held out at arm’s length. Now, look for the dimmest blips of light that hide far behind the larger colorful structures. These blips are some of the first galaxies in our Universe; their remnant light stretched and reddened from a 13-billion-year journey through expanding space. Today’s paper examined the most massive of these ancients in a new way.

        The canonical formation scenario of the first galaxies is that small perturbations in the initial distribution of matter triggered gravitational collapse, leading to early galaxies that are morphologically irregular, clumpy, and compact. Observations lend credence to this, as the relatively small number of galaxies that have been observed at times less than a billion years after the Big Bang possess these morphological traits. But did the early universe also contain large, rotating disk-like galaxies more like our Milky Way? It’s hard to say, since these galaxies would be rare in the early universe, and both simulations and observations of this early time have so far been severely volume-limited. The authors of today’s paper investigated this question by simulating the early universe with a volume about 50 times larger than previous studies using a hydrodynamic simulation called BlueTides on one of the most powerful supercomputers in the world – Blue Waters.

        Screen Shot 2015-05-05 at 5.47.33 PM

        Figure 1. A sample of disk galaxies from the BlueTides simulation at redshift z=8 that were found to be rotating. Both face-on and edge-on views of the galaxies are shown. The top two rows show stellar surface density with older stars colored red and younger stars colored blue, and the bottom two rows show star formation surface density, with brighter colors mapping higher densities. Figure 1 from the paper.

        Evolving a simulation of this size all the way to the present would take an enormous amount of time even with the best supercomputer; the “box” containing the simulation had a comoving volume of 400 megaparsecs cubed. Instead, this study only ran the simulation up to a redshift of 8 (less than a billion years after the Big Bang). 36 million galaxies were synthesized by the simulation, and a few hundred of the rare, more massive galaxies (above 10 billion solar masses) were picked out for a look at their properties and morphologies. The earliest rotating disk galaxies observed to date are at redshifts of z<3 (~11 billion light-years away), so one of the goals of this simulation was to gain insight into these types of galaxies from an epoch 2 billion years earlier. Some of the disk-shaped massive galaxies that formed in the simulation are shown above in figure 1. Using a technique called kinematic decomposition, 70% of the massive galaxies in the simulation were found to be kinematically disk-like and rotating. This is a pretty stark contrast to modern massive galaxies, with only about 14% exhibiting disk-like structure. The half-light radii of the early massive galaxies in the simulation were also compared to those of high-z galaxies observed by Hubble (figure 2). The simulated galaxies had very small half-light radii and were therefore quite compact, more compact than the handful of optical counterparts at these redshifts. The slight underestimate of half-light radii in the simulation is something the authors plan to explore more deeply in future studies.
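
        For a sense of how rare these objects are, here is a quick back-of-the-envelope number density in Python using the figures quoted above; treating "a few hundred" massive galaxies as roughly 300 is an assumption made purely for illustration.

            # Back-of-the-envelope number densities from the quantities quoted above.
            box_side_mpc = 400                    # comoving box side, in megaparsecs
            volume_mpc3 = box_side_mpc ** 3       # 6.4e7 comoving Mpc^3

            n_all_galaxies = 36_000_000           # galaxies synthesised by z = 8
            n_massive = 300                       # stand-in for "a few hundred"

            print(f"all galaxies:     {n_all_galaxies / volume_mpc3:.1e} per Mpc^3")  # ~5.6e-01
            print(f"massive galaxies: {n_massive / volume_mpc3:.1e} per Mpc^3")       # ~4.7e-06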

        Screen Shot 2015-05-05 at 5.47.47 PM

        Figure 2. Half-light radii of simulated galaxies at redshifts 8, 9, & 10, as well half-light radii of galaxies observed by the Hubble Space Telescope during three different studies. Figure 4 from the paper.

        Do these new simulations correctly predict the properties of massive galaxies in the incipient universe? We hopefully will not have to wait long to find out. A motivation to perform this simulation is to prepare for the next generation of powerful telescopes on the horizon. Of particular relevance is the WFIRST satellite – a Wide-Field InfraRed Survey Telescope equipped with a 2.4-m mirror, advanced spectroscopic capabilities, and a PR video fit for a Liam Neeson movie. WFIRST will have a field of view 200 times larger than Hubble’s Wide Field Camera 3, and is planning to survey 2000 square degrees of the deep sky (distances similar to what Hubble accessed in its deep field images). This simulation predicts that WFIRST will find about 8000 of these young, massive disks throughout its survey, whereas the largest-area Hubble survey to date had only a 30% chance of catching one of these ancient behemoths. But until the early 2020s when WFIRST is set to launch, we’ll have to count on simulations such as this one to learn about the ancient giant galaxies of the Universe.

        maxresdefault

        Figure 3. Artist’s conception of the WFIRST satellite. Courtesy of NASA.


        Curiosity update, sols 949-976: Scenic road trip and a diversion to Logan's Run by The Planetary Society

        Curiosity is finally on the road again! And she's never taken a more scenic route than this. Her path to Mount Sharp is taking her to the west and south, across sandy swales between rocky rises.


        Sunset on Mars by The Planetary Society

        Long before Curiosity's landing, the description of the color camera made me dream: I imagined what wonderful pictures we could get of sunsets and sunrises on Mars. They finally came on sol 956, the 15th of April, 2015.


        In Pictures: SpaceX Crew Dragon Takes Flight in Pad Abort Test by The Planetary Society

        A SpaceX Crew Dragon rocketed into the sky under its own power this morning, completing a critical milestone necessary to certify the spacecraft for crewed flights in 2017.


        May 06, 2015

        Election Day. by Feeling Listless



        That Day Here we go then ... Norah Jones wrote the following on the eve of the 2004 US election but the lyrics feel just as valid in this moment too.

        'Twas Halloween and the ghosts were out,
        And everywhere they'd go, they shout,
        And though I covered my eyes I knew,
        They'd go away.

        But fear's the only thing I saw,
        And three days later 'twas clear to all,
        That nothing is as scary as election day.

        But the day after is darker,
        And darker and darker it goes,
        Who knows, maybe the plans will change,
        Who knows, maybe he's not deranged.

        The news men know what they know, but they,
        Know even less than what they say,
        And I don't know who I can trust,
        For they come what may.

        'cause we believed in our candidate,
        But even more it's the one we hate,
        I needed someone I could shake,
        On election day.

        But the day after is darker,
        And deeper and deeper we go,
        Who knows, maybe it's all a dream,
        Who knows if I'll wake up and scream.

        I love the things that you've given me,
        I cherish you my dear country,
        But sometimes I don't understand,
        The way we play.

        I love the things that you've given me,
        And most of all that I am free,
        To have a song that I can sing,
        On election day.


        Soup Safari #25: Potato and Leek at Left Bank Brasserie. by Feeling Listless







        Lunch. £4.50. Left Bank Brasserie, 1a, The Beacon, Halsall Lane, Formby, Liverpool L37 3NW. Phone: 01704 832342. Website.


        My Favourite Film of 1997. by Feeling Listless



        Film There have been some years in which it's been almost impossible to choose a single film. It's the plague of the cineaste. Some of us can say what our favourite film ever is (you'll see) but outside of that, when faced with a limitation, a genre or a year, we've seen so many worthwhile, good and potentially meaningful pieces of work that it's then impossible to tie things down.

        Which is essentially me saying that although I've chosen The Fifth Element, it could equally have been Contact, Chasing Amy, Scream 2, LA Confidential, Men in Black, the various Star Wars rereleases, Titanic or The Peacemaker or a dozen other films that year which weren't made in Hollywood. Wilde. Smilla's Feeling For Snow. In The Company of Men. Shooting Fish.  1997 was some year.

        I've chosen Luc Besson's The Fifth Element not just because I think it's a peerless example of production design, of fun and of how single characters like Milla's Leeloo can be so intriguing that they have the ability to eclipse the less impressive, well, elements like Chris Tucker's whatever Chris Tucker is doing and not just because  Eric Serra's soundtrack is also still one of my favourite records, alien and familiar, futuristic and contemporary.

        But I also wanted to acknowledge the circumstances of when I saw it, at the second National Cinema Day.

        National Cinema Day first happened on June 2nd 1996 on the hundredth anniversary of commercial cinema when I was just at the tail end of my final year of university and I spent the day at the Hyde Park Picture House in Leeds watching, amongst other things, Wayne Wang's Smoke, a preview of the David O Russell comedy Flirting With Disaster and (I think) the rerelease of Withnail & I amid trailers for other upcoming attractions.

        Here's a promotional film which was created for it that has one of the best pieces of unexpected swearing you'll ever see.



        In 1997 I was back in Liverpool, the town where I was born, and on Sunday 15th June I was at a packed Odeon on London Road for the second go around.  As ever there was the usual mix of new releases, rereleases and preview screenings, and as I write this I remember the order: The Fifth Element in the cavernous screen one, then Scream (for the second time around), followed by One Fine Day, the underrated Pfeiffer/Clooney comedy.

        Here's what I remember about seeing The Fifth Element.  Sitting in about five different seats.  With entry reduced to £1, packed auditoriums led to multiple code of conduct violations.  Even without mobile phones then people just didn't seem interested in bothering to watch the film presumably because it had only cost them a pound to get in.  So I moved around a lot trying to find a seat where I could actually concentrate on the film.

        This wasn't helped either by two kids who kept throwing popcorn at the back of my head.  They followed me twice.  I'd settle down then there they'd be again and I'd feel something getting caught on my hair or bouncing off my shoulder.  Eventually they were chased out by ushers because someone else complained.  Which sounds like me passing the buck but really I've always found it easier to move than anything else.

        National Cinema Day returned the following year but on that occasion tickets were "only" half price, so my understanding is that attendances were much lower and it wasn't repeated, which is a shame because it was a great way to promote cinema and increase audience numbers.  It'll be interesting to see the effect the end of Orange Wednesdays, its distant discount cousin, will have on the same.

        Of course having written all of this, I've realised that the premise of this project has to change by a year. I was going back as far as 1897, but I'd not realised that the Lumiere Bros began commercial cinema in Paris in 1896 so I'll have to add an extra year on at the other end. Not that I'm sure what my favourite film of 1896 will prove to be but there's a high probability that the list of potentials will be shorter than from the mid-90s.


        Program a Pacifist Tyranny by ntoll

        Program a Pacifist Tyranny

        Tuesday 5th May 2015 (19:00)

        Edit #1 (6th May 2015): Upon re-reading this article with fresh eyes I realise it's very "raw". This is merely a reflection of the underdeveloped nature of the ideas expressed herein. Put simply, I'm interested in how programming relates to the exercise of power. Keep that in mind as you read on. Finally, I welcome feedback, constructive critique and ideas - it helps these ideas to develop. I will ignore vacuous comments that state some variation of "you're wrong because I'm right and I know what I'm talking about". Thoughtful, respectful yet robust argument is always most welcome! :-)

        Edit #2 (9th May 2015): I've made minor changes to simplify and clarify several points. I've also improved the flow by correcting clunky prose.


        Violence is a forced curtailment of one's well-being and autonomy, usually via unwanted physical intervention by a third party.

        To be blunt, the threat and eventual use of violence is how government imposes itself on citizens. In places like the UK, the Government derives its authority to use such onerous power from its legitimacy as a democratic institution: the citizens get a say in who is in charge. Laws created by the legislative and judicial elements of government define the scope of the threat and use of state-sanctioned violence towards citizens.

        Usually violence is not required - the threat suffices. Most people pay fines associated with a traffic ticket: they understand failure to do so would end badly for them (arrest followed by prison). Obeying the law is so ingrained and unquestioningly assumed that it is a habitually formed behaviour - violence is not even appreciated as part of the context.

        Any reasonably intelligent person understands why we have laws: they help people live together peacefully and, one would hope, in a way that is consistently equitable and fair. The figure of Lady Justice atop the Old Bailey is the personification of law as an impartial, objective process without fear or favour. Put simply, the law and the veiled threat of violence apply equally to all.

        Except that it obviously doesn't and the law is an ass on so many occasions.

        There are any number of examples I could use to illustrate bad laws misapplied in an unequal, prejudicial and discriminatory way. So common is this unfortunate turn of events that I imagine you could think of your own examples. As a result, I'm merely going to bring to your attention the case of Aaron Swartz, a gifted coder and activist who was hounded by legal authorities in the US until he committed suicide. I strongly recommend you watch the rather excellent The Internet's Own Boy (embedded below), a creative commons licensed film about these tragic events:

        What do such cases tell us? While the spirit of the law is "blind" and impartial, the practice and application of the law isn't.

        The authorities understand this and realise that technology can help both in terms of law enforcement and impartiality. For example, in the UK speed limits on roads are often measured by average-speed-check cameras.

        Average speed check sign

        At certain points along a road (and at known distances apart) cameras are positioned to read car registration number plates. If you average a speed greater than the advertised limit then you are automatically sent a speeding ticket.
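
        A minimal sketch of that logic in Python, with made-up numbers: two timestamped sightings of the same plate at cameras a known distance apart give an average speed, and anything over the limit gets flagged. The distance, limit and times below are all illustrative assumptions.

            from datetime import datetime

            # Illustrative figures only: a 5 km stretch with an 80 km/h (~50 mph) limit.
            distance_km = 5.0
            limit_kmh = 80.0

            t_camera_a = datetime(2015, 5, 5, 9, 0, 0)   # plate read at the first camera
            t_camera_b = datetime(2015, 5, 5, 9, 3, 0)   # same plate read 3 minutes later

            hours_elapsed = (t_camera_b - t_camera_a).total_seconds() / 3600
            average_speed = distance_km / hours_elapsed  # 100 km/h in this example

            if average_speed > limit_kmh:
                print(f"ticket issued automatically: {average_speed:.0f} km/h "
                      f"in an {limit_kmh:.0f} km/h zone")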

        At no point in this process is a human actually involved. The cameras never tire, they work without prejudice and they apply their machinations to everyone. Such co-opting of automated computing technology for law enforcement appears to be on the rise.

        What could possibly go wrong?

        Since the Snowden revelations we know that everything we do online is open to the government. If you're technically savvy enough to correctly use encryption, such innocent countermeasures become an automatic marker for the authorities to investigate further. More worryingly, lethal force is becoming automated through the use of autonomous drones and other similar technologies.

        This raises an important question:

        Who defines the machinations of such autonomous computing enforcement devices?

        In a sense, programmers do. The code they design and write encapsulates the behaviour of the speed camera or autonomous drone. Furthermore, with the increased connectivity and programmability of "stuff" (the so called Internet of Things) our tools, belongings and infrastructure are increasingly dependent on code. Our world (more accurately, the things we create in it that make it comfortable, pleasant and interesting) is becoming programmable. Ergo, governments have a new mechanism for imposing upon citizens: software.

        The sanction to force compliance is no longer violence - the government can pass laws to program the world to automatically coerce you, nudge you or persuade you. For instance, imagine a situation where the car of a flagged suspect is automatically and remotely deactivated until certain conditions are met (not dissimilar to the Police intervention in this case).

        Governments can legislate to program a pacifist tyranny.

        Why pacifist? Because the traditional threat of violence is replaced by a threat of non-action or (worse) counter-action on the part of your belongings.

        Why tyranny? Because citizens no longer control or own their belongings. They can't argue with code if it refuses to start their car, nor can they change such code so it more closely fits their requirements.

        Unfortunately, most people don't understand code in much the same way that medieval serfs couldn't read the Bible (so couldn't question the authority of the church). It's not that programming is hard, it is simply not a widely practised skill.

        This is why programming is such an essential skill to promote in education. In order to lead a flourishing self directed life in such a digitally dominated world we must have control over our digital devices both in a physical and programmable sense. The alternative is to allow others, through the code they write, intimate control over our world.

        By the way, it's not just governments that exercise this power: any service that helps you organise your life can do the same. Facebook, Google and the rest are already trying to modify your behaviour (except they're trying to get you to spend money rather than obey the law).

        How can such a morally suspect state of affairs be foiled?

        Our digital world is dominated by centralised entities that hold power and control over our data and devices. Only by decentralising (to avoid points of control and coercion) and engaging humanity to learn about and take control of the computational world will a tyranny of software be averted.

        As Bruce Schneier points out in the following excellent talk on security and encryption, software itself does not distinguish morality from legality - it's merely "capability". Yet capability permits certain forms of behaviour that in turn pose moral, legal and political questions, requirements and possibilities. Furthermore, we're engineering a digital world from a certain point of view that is reflected in the capabilities of the code we create. It is for this reason that writing software is both an ethical and political activity.

        If you're a coder, ask yourself about your own project's capabilities. Work out how it influences, empowers or diminishes your users. If at all possible, promote users' personal autonomy.

        Technology should help humanity flourish (rather than constrain).


        CSI: Universe by Astrobites

        • Title: Type IIb Supernova 2013df Entering Into An Interaction Phase: A Link between the Progenitor and the Mass Loss
        • Authors: K. Maeda, T. Hattori, D. Milisavljevic, G. Folatelli, M.R. Drout, H. Kuncarayakti, R. Margutti, A. Kamble, A. Soderberg, M. Tanaka, M. Kawabata, K.S. Kawabata, M. Yamanaka, K. Nomoto, J.H. Kim, J.D. Simon, M.M. Phillips, J. Parrent, T. Nakaoka, T.J. Moriya, A. Suzuki, K. Takaki, M. Ishigaki, I. Sakon, A. Tajitsu, M. Iye
        • First Author’s Institution: Kyoto University
        • Paper Status: Submitted to the Astrophysical Journal

        As morbid as it sounds, astronomers often compare the study of core-collapse supernovae (SNe), the explosive deaths of stars, to an autopsy of a mysterious new arrival at a morgue. We often don’t know a lot about the last days of their lives: whether they had any partners, any recent episodes of rapid weight loss, or any strange behavior. All of these factors can greatly affect what we do see: their final moments as luminous supernovae. We think that the stellar histories of supernova progenitors are the main reason why we see such a variety of SNe in the night sky, and we need to understand this history to paint a unified picture of core-collapse SNe.

        In today’s paper, the authors follow a single type IIb supernova, SN 2013df, from explosion to long after its death (~600 days later). As a reminder, SNe are classified by their light curves and spectra; type IIb supernovae are those which first show weak hydrogen lines that become undetectable with time. A famous remnant of a type IIb SN is Cassiopeia A, shown in Figure 1.

        Cassiopeia A

        Figure 1: A false-colored image of the type IIb supernova remnant, Cassiopeia A. In this image, red is IR data from Spitzer; orange is visible light from Hubble; the blue and green data are X-rays detected by Chandra.

        At late times, most supernovae will dim as the cobalt-56 produced in the explosion converts into iron-56. This radioactive decay has a very predictable light curve. However, in SN 2013df, the authors find something very surprising: the supernova actually dims at a rate much slower than predicted. This suggests that the supernova has some additional energy source.
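
        For reference, the purely decay-powered fade is easy to quantify: with cobalt-56’s half-life of about 77 days, the late-time light curve should drop by roughly one magnitude per hundred days. Here is a quick worked number in Python (a generic calculation, not one taken from the paper):

            import math

            # Expected late-time dimming rate if the light is powered purely by
            # 56Co -> 56Fe decay (half-life ~77.2 days).
            half_life_days = 77.2
            decay_per_day = math.log(2) / half_life_days        # fractional decay rate
            mag_per_day = 2.5 / math.log(10) * decay_per_day    # convert to magnitudes

            print(f"~{100 * mag_per_day:.2f} mag per 100 days")  # roughly 1 mag / 100 days
            # SN 2013df fades noticeably more slowly than this, pointing to an
            # extra energy source such as interaction with circumstellar material.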

        We can look at the spectrum for clues to this mysterious energy. As shown in Figure 2, the H-alpha emission line significantly brightens at 600 days and has a distinctive “boxy” shape. This is consistent with the supernova interacting with its circumstellar medium (CSM). As the supernova blast hits a shell of dense CSM, it shocks the surrounding CSM and a reverse shock propagates inwards towards the SN ejecta, which is catching up to the shock. The H-alpha emission is from the unshocked SN ejecta irradiated by X-rays produced in the reverse shock. This thin-shell, high-velocity emission model will produce boxy emission lines, like the one we see in SN 2013df.

        Figure 2: Spectrum of SN 2013df at different times. At 626 days after explosion, you can see the boxy H-alpha line emerge.

        SN 2013df is not the first SN to show CSM interaction at late times. In fact, we are beginning to see the first hints of two classes of these explosions: those with more extended hydrogen-rich envelopes, which have larger mass loss rates in their final days, and those with compact envelopes and lower mass loss rates. This may seem contradictory to your intuition: if a star has a large mass loss rate, we would expect its hydrogen shell to be greatly depleted. So what’s going on?

        The authors believe that these classes are actually in a continuum of possibilities which rely on these SNe having binary companions. Exactly when the progenitor begins to lose its hydrogen shell to its partner will affect the properties of the SN. In particular, the extended objects are just starting to lose their hydrogen shells at the time of explosion – which is why they have such high mass loss rates yet so much hydrogen. In contrast, the compact objects perhaps underwent substantial mass loss long before the progenitor decided to kick the bucket.

        In the big scheme of SNe and their progenitors, it seems likely that binary partners play a substantial role in our understanding of these stellar detonations. Long term observational follow-up can help us both look for CSM interactions (as in the case of SN 2013df) and search for binary companions, if they exist.

         


        A week's worth of "RC3" images from Dawn at Ceres by The Planetary Society

        Now that Dawn is in its science orbit at Ceres, the mission has been releasing new images every weekday!


        Mars Exploration Rovers Update: Opportunity Logs Sol 4000, Digs Spirit of St. Louis Crater by The Planetary Society

        After investigating some flat, light and dark toned rocks around Spirit of St. Louis Crater in April, Opportunity chalked up another milestone achievement – the 4000th sol or Martian day of surface operations.


        May 05, 2015

        Talks Collection: UK Parliament. by Feeling Listless



        Politics Something a bit different this week, in this election week, which will shortly be over, thank goodness. In the midst of all the backbiting and shouting about who will form the next government, not much has been said about the institution of the parliament itself, which as anyone who saw Michael Cockerell's superb Inside the Commons series will know is just as much about a building as the people who work there.

        The UK Parliament.  

        Launched about six years ago, this YouTube channel collectively features short documentaries from the BBC and elsewhere about the chamber and how it works (many of which have seen service recently as filler between campaign events on the BBC Parliament channel) as well as snatches of the official tour, lectures from events inside and outside the building and key parliamentary sessions.

        House of Lords.

        The upper chamber has its own channel for some reason with plenty of much shorter interview snatches covering anecdotes about the place as well as explanations of their business.

        TEDx Houses of Parliament

        For the past three years, the House has hosted its own event with speakers including MPs, journalists and academics.  My initial plan for this post was to recreate the 2014 event here, but all of the events have been recreated on various pages linked here.  Last year began with an address from Aung San Suu Kyi.

        Hansard Society.

        "The Hansard Society believes that the health of representative democracy rests on the foundation of a strong Parliament and an informed and engaged citizenry. A charity, founded in 1944 and working in the UK and around the world, we are an independent, non-partisan political research and education Society devoted to promoting democracy and strengthening parliaments."

        Historic Royal Palaces

        For ceremonial purposes, Westminster retains a status as a "royal palace" and features on the fringes of their channel, though I also think that if we're talking about the history of UK government, you can't really overlook the importance of its predecessors, especially Hampton Court.


        A Cepheid Pulsator in an Eccentric Binary by Astrobites

        Today in “why is my star’s brightness changing?”, let’s take a look at one of the craziest light curves this astrobiter has ever seen.

        A pulsating Cepheid variable in an eclipsing binary makes for one crazy light curve. Because the pulsation period is close to 48 hours, and observations (black points) are only possible every 24 hours (at night), a pattern known as “beating” appears to the eye. A model of the pulsing, eclipsing system (solid line) illustrates the true nature of this system.

        In the figure above, the black points are brightness observations measured in the near-infrared I-band. The solid line is the model created by the authors of today’s paper to explain what is happening. There are two things going on here: a pulsating Cepheid variable (rapid brightness changes) is being eclipsed by an orbiting companion star (the large dip) every 800 days.
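
        To see why the once-a-night sampling produces that beating pattern, here is a minimal simulation sketch in Python; the 48.2-hour period and 10% amplitude are illustrative stand-ins, not the values fitted in the paper.

            import numpy as np

            # A pulsation with a period close to (but not exactly) 48 hours, sampled
            # only once every 24 hours, traces out a slow apparent modulation.
            true_period_h = 48.2                        # assumed pulsation period
            times_h = np.arange(0, 300 * 24, 24.0)      # one observation per night
            flux = 1.0 + 0.1 * np.sin(2 * np.pi * times_h / true_period_h)

            # Consecutive nights land on nearly opposite pulsation phases, so the
            # samples alternate high/low with an envelope that drifts over months:
            # early nights sit near the mean flux, while by night ~100 the
            # alternation is at nearly full swing.
            for night in (0, 1, 2, 3, 100, 101, 102, 103):
                print(f"night {night:3d}: relative flux {flux[night]:.3f}")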

        Classical Cepheid variables are an observer’s best friend. They obey a period-luminosity relation—intrinsically brighter Cepheids pulsate more slowly—that makes them incredibly useful for measuring cosmic distances. The chance discovery of a Cepheid variable star in Andromeda in 1924 by Edwin Hubble was a turning point in modern astronomy. Measuring a huge distance to Andromeda meant that it and other faint, fuzzy blobs in the sky we now know as galaxies were definitively not part of our own Milky Way. The subject of today’s paper is actually located in our neighboring Large Magellanic Cloud (LMC) dwarf galaxy.

        More than a cosmic meterstick

        Cepheids are interesting stars in their own right, too. The pulsing brightness variations happen because the star’s temperature and radius are changing, and they occupy a unique niche of stellar evolution. Not all stars go through a Cepheid phase of life, but those with masses ranging from about 4–20 M☉ do. Some stars can actually cross into the so-called instability strip and become Cepheids more than once! Overall, though, the Cepheid variable portion of a star’s life is extremely short. We can learn a lot about what is physically happening inside stars during this tumultuous time through close observations.

        Or rather, we could learn a lot about what happens inside Cepheid variable stars, if only we knew their masses. As for all stars, a Cepheid’s mass is the key to understanding its fate. Since we lack a cosmic bathroom scale, the only direct way to measure the mass of a star is when you can observe something else orbiting it.

        Until as recently as 2010, we knew of no Cepheids in eclipsing binaries with observable signals from both stars. Today, there are just a few, including our friend OGLE-LMC-CEP-2532, so it is critical to measure their masses as accurately and precisely as possible.

        Radial velocities of both stars in the Cepheid binary (left) and a schematic of the eclipse configurations as seen from Earth (right). In the velocity curves, filled points are the Cepheid with pulsations removed and open points are the companion. A full orbit takes 800 days. The system has an eccentric orbit, which can be seen from the non-sinusoidal velocity curves, and both stars have similar masses. It is not clear if the Cepheid (shown in white, right) actually eclipses a portion of its companion (shown in gray, right) or not.

        As it turns out, this binary star’s orbit is very eccentric, yet the two stars have similar masses. This is apparent from the bow-tie shape of the radial velocity curve above. And while the companion star does pass in front of the Cepheid, the authors cannot say for sure if the Cepheid just barely passes in front of its companion.

        To fully characterize both stars in a binary, you need two eclipses, otherwise the stars’ masses are conflated with the geometry as viewed from Earth. However, just knowing that the secondary eclipse is “close but not visible” helps constrain the inclination in Markov chain Monte Carlo simulations. In the end, the authors are able to measure the Cepheid’s mass as 3.90 ± 0.10 M☉. This is one of the most accurate mass determinations for a classical Cepheid and will go a long way toward improving our understanding of Cepheids’ inner secrets.


        How to Watch the Humans to Mars Summit This Week by The Planetary Society

        Our friends at Explore Mars are live-streaming their Humans to Mars Summit this week, happening in Washington, D.C.


        Despite Rain Delays, NASA Prepares for Busy Year of SLS Engine Tests by The Planetary Society

        Despite a rainy spring that has caused schedule delays, NASA is preparing for a busy year of Space Launch System engine testing.


        May 04, 2015

        "One of the hardest companions to categorize is Compassion." by Feeling Listless

        News Usually when the "mainstream" media covers Doctor Who, it's still with an eye to anything on television. Well then here's a fabulously off-piste list from the Houston Press, which currently has a shot of Kamelion on its front page and also goes here:

        Compassion
        One of the hardest companions to categorize is Compassion. She was originally a human from a distant colony that through a series of bizarre accidents became a living, sentient Tardis in human form. At the time the Eight Doctor had lost his own Tardis and the Time Lords were looking to capture Compassion to breed their own fleet of sentient ships. This led The Doctor, Fitz and Compassion to go on the run using her as a portal through space and time. Eventually she returned The Doctor to his own ship and left to explore the universe. Like the rest of the Eighth Doctor novels this likely happened in an alternative timeline.
        I'm assuming the image of Tenth talking to Frobisher is from one of the IDW comics by the Tiptons I didn't get around to reading.


        Five key indicators for success by Simon Wardley

        When competing in business, there are five key indicators for success that I'm aware of.

        1) Purpose : That which causes others to desire to follow you without fear and to have harmony with your goal.

        2) Climate : The interaction of  the business with the economic climate and common economic patterns. The conduct of operations with this in mind.

        3) Situational Awareness : Understanding of the landscape and the exploitation of this to your advantage through strategic play.

        4) Leadership : The ability to set a direction of travel and be followed. These include the virtues of leadership - sincerity, humanity, courage, firmness and wisdom.

        5) Doctrine : The organisation itself, the mechanisms of control, the level of appropriate autonomy, the structures used, the mechanisms of governance, the different cultural forms and the methods applied. 

        I have yet to discover five indicators that are better. The above indicators were written about by Sun Tzu, approximately 2,500 years ago. They were called moral influence, weather, terrain, command and doctrine. I've been asked for a list of top 100 strategy books that I recommend reading. After careful consideration, I've now updated my list.

        @swardley's top 100 management strategy books

        1. Sun Tzu, The Art of War.
        2. Read another translation of The Art of War.
        3. Read another translation of The Art of War.
        4. Read another translation of The Art of War.
        5. Re-read any of the above again paying even closer attention to it.
        6. Read another translation of The Art of War.
        7. Read another translation of The Art of War.
        8. Read another translation of The Art of War.
        9. Re-read any of the above again paying even closer attention to it.
        ....
        100. Everything else.

        Seriously, I do recommend reading & re-reading multiple translations of the Art of War / Warfare.


        May 03, 2015

        Big Torchwood Finish. by Feeling Listless



        Audio Not too long ago there were rumours of things going astray, and a great confusion as to where things really are, and nobody will really know where lieth in relation to new radio Torchwood. Here's Gen of Deek reporting John Barrowman mentioning them during an Arrow press conference. Everyone, but probably not his mother, assumed that it would be in the form of another Radio 4 thing.

        No. Actually, in a moment which probably took everyone by surprise, and which I missed because I was watching the astonishingly rubbish horror Grace: The Possessed, Big Finish have announced they've secured the licence from BBC Worldwide and are producing a series of six audio dramas starring Barrowman initially (and, in the first one, John Sessions, Sarah Ovens and Dan Bottomley).

        Good grief.  I'm actually very pleased about this.  Other than Children of Earth, some of Torchwood's best hours were on audio both in the Radio 4 series and the linked audiobooks produced by AudioGo.  David Llewellyn wrote the very good PC Andy focused installment of those, Fallout, and he's the author/writer of the first release The Conspiracy (directed by Scott Handcock).  Here's the synopsis:

        "Captain Jack [REDACTED] has always had his suspicions about [REDACTED]. And now [REDACTED] is also [REDACTED] about [REDACTED]. Apparently the world really is under the control of [REDACTED]. That's what [REDACTED] says. [REDACTED] have died, disasters have been [REDACTED], the [REDACTED] have disappeared.  It's outrageous. Only [REDACTED] knows that [REDACTED] is right. [REDACTED] has arrived."
        Along with the UNIT news (and dare they cross them over?) this is Big Finish making strides into new Doctor Who. How long will it be now before we have an announcement of new material for the 10th or 11th Doctor (with the 9th about twenty years in the future, when Eccleston mellows)?  We feel closer and closer to the tipping point.  McGann was five years on from his TV appearance when he began.  It's five years since Tennant left...


        Black Widow Trailer. by Feeling Listless



        Film Funnily enough I've been "campaigning" for years for MARVEL to make a rom com set in the MCU. Just not this. Obviously. The meta-irony in this is overwhelming.


        Exoplanet masses, probably by Astrobites

        Paper: Probabilistic Mass-Radius Relationship for Sub-Neptune-Sized Planets
        Authors: Angie Wolfgang, Leslie A. Rogers, Eric B. Ford
        First author’s institution: UC Santa Cruz
        Status: Submitted to ApJ

        Thousands of transiting exoplanets have been discovered, but for most of these planets we only know their radius and nothing about their mass. With a mass-radius relation, we can infer the masses of all these planets. This paper provides a new, probabilistic mass-radius relation for small planets, and its approach is somewhat unusual…

        Figure 1. The mass-radius relation for sub-Neptune sized exoplanets. The exoplanets with both mass and radius measurements are shown in black. The resulting mass-radius relation is the solid blue line, and shaded blue region shows the intrinsic width of the line. This is figure 4 in the original paper.

        Astrophysical dispersion in the Mass-Radius relation

        A few dozen exoplanets have both a radius measurement, because they transit their host star, and a mass measurement, because they cause enough reflex motion of their host star that we can measure their mass via the radial velocity method. All of the ‘sub-Neptune-sized’ exoplanets (those smaller than 4 times the size of Earth) with both mass and radius measurements are plotted in figure 1.

        These data can be used to determine the mass-radius relationship for all small exoplanets, a super-useful thing to know because the majority of detected exoplanets only have radius measurements. With a nicely calibrated mass-radius relationship, we can predict an exoplanet’s mass, given its radius. But here’s the rub: the mass and radius measurements for these exoplanets don’t just fall neatly on an infinitely narrow line; there is a lot of scatter. Some of that scatter is produced by the noisy data, i.e. not every measurement is infinitely precise, so one would naively expect the measurements to fall slightly away from the best-fit line (68% of the measurements should fall within 1σ of the line if the observational uncertainties are perfectly Gaussian). But the real question is, even if you knew the mass and radius of each exoplanet with infinite precision, like you could actually go to the planet with a tape measure and a set of scales and measure those things, would the masses and radii fall on an infinitely narrow relationship then? Wolfgang and coauthors show that the answer is no, and they quantify exactly how much intrinsic scatter there is in the relation between exoplanet mass and radius.

        Hierarchical Bayesian Modelling

        Hierarchical Bayesian Models (HBMs) are useful when you have layers of parameter dependencies. There are a few different sets of parameters in this particular problem. There are the parameters for the mass-radius relation, which is a power law, and there are the parameters that describe the distribution of masses that are allowed for a given radius. Wolfgang et al. assume a Gaussian distribution is a good description for this, and that Gaussian has a mean (the exact mass predicted for a given radius as computed using the mass-radius relation) and some standard deviation, \sigma. Actually, this \sigma is a function of radius too: there is a smaller mass dispersion for small planets than for big planets.

        So that’s the story: a set of parameters to describe the mass-radius relation and a set of parameters to describe the scatter in the mass-radius relation. In a classical “fitting a model to data” situation, there is no extra set of parameters to describe the dispersion and relationships are just assumed to be deterministic (i.e., for a given radius there is one ‘true’ mass). Having these two sets of parameters is what makes this approach hierarchical. What makes it Bayesian is the fact that the authors use prior Probability Distribution Functions (PDFs), i.e. they have beliefs about what the parameter values should probably be, and they explore the posterior PDFs of those parameters (the probability of the parameters, given the data). I won’t go into detail about Bayesian statistics here; take a look at this astrobite for more info.
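
        To make the structure of the model concrete, here is a minimal generative sketch in Python. The power-law mean relation and the radius-dependent Gaussian scatter follow the description above, but every numerical value (C, gamma, sigma0, sigma_slope) is an illustrative assumption, not the paper's fitted result.

            import numpy as np

            rng = np.random.default_rng(42)

            # Illustrative (not the paper's fitted) hyperparameters of the
            # probabilistic mass-radius relation, in Earth units:
            #   mean relation:  M = C * R**gamma
            #   scatter:        sigma(R) grows linearly with radius
            C, gamma = 2.7, 1.3               # assumed power-law normalisation and slope
            sigma0, sigma_slope = 1.0, 0.5    # assumed dispersion at R = 1 and its growth

            def draw_masses(radii, n_draws=1):
                """Draw masses from the probabilistic relation for the given radii."""
                radii = np.atleast_1d(radii)
                mean_mass = C * radii**gamma                                   # deterministic part
                sigma = np.clip(sigma0 + sigma_slope * (radii - 1.0), 0.1, None)  # intrinsic scatter
                return rng.normal(mean_mass, sigma, size=(n_draws, radii.size))

            # Simulate a toy "observed" population of sub-Neptunes
            true_radii = rng.uniform(1.0, 4.0, size=50)
            true_masses = draw_masses(true_radii)[0]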

        Wolfgang et al. use Gibbs sampling to explore the posterior PDFs of the parameters. Their new model is shown in figure 1. The solid blue line shows the ‘best fit’ mass-radius relation and the shaded blue region shows the dispersion of this relation, \sigma. The previous mass-radius relation of Weiss & Marcy (2014) is shown as the dashed black line.
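
        For a flavour of what “exploring the posterior” involves, here is a deliberately simple sketch that continues the toy example above. It uses a generic Metropolis sampler rather than the Gibbs sampler the authors actually use, holds the scatter parameters fixed and assumes flat priors, so it is only an illustration of the idea, not their method.

            # Continues the sketch above (uses np, rng, sigma0, sigma_slope,
            # true_radii and true_masses defined there).
            def log_posterior(params, radii, masses):
                c, g = params
                if c <= 0:                                    # flat prior, C > 0
                    return -np.inf
                sigma = np.clip(sigma0 + sigma_slope * (radii - 1.0), 0.1, None)
                resid = masses - c * radii**g
                return -0.5 * np.sum((resid / sigma)**2 + np.log(2 * np.pi * sigma**2))

            chain, current = [], np.array([2.0, 1.0])         # starting guess for (C, gamma)
            current_lp = log_posterior(current, true_radii, true_masses)
            for _ in range(20_000):
                proposal = current + rng.normal(0, 0.05, size=2)
                lp = log_posterior(proposal, true_radii, true_masses)
                if np.log(rng.uniform()) < lp - current_lp:   # Metropolis acceptance rule
                    current, current_lp = proposal, lp
                chain.append(current.copy())

            print("posterior medians (C, gamma):", np.median(np.array(chain), axis=0))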

        The point of it all

        Say you know an exoplanet’s radius and you want to infer its mass. This model won’t just give you a ‘point estimate’ as an answer (e.g. “this planet should weigh exactly 3.215 times the mass of Earth”); it will provide you with a probability distribution over masses. This is a much more intuitive and informative way to think about measurements and inferences of parameters, because almost nothing can be known with infinite precision; everything can be thought of as a probability distribution.
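
        In the toy sketch above, that predictive distribution is easy to produce: propagate the radius uncertainty through the probabilistic relation and summarise the resulting spread of masses. The measured radius and its uncertainty below are made-up numbers.

            # Continues the sketch above: a planet whose radius is "measured"
            # as 2.0 +/- 0.1 Earth radii (illustrative values only).
            radius_samples = rng.normal(2.0, 0.1, size=10_000)   # radius uncertainty
            mass_samples = draw_masses(radius_samples)[0]        # one mass per radius draw

            lo, med, hi = np.percentile(mass_samples, [16, 50, 84])
            print(f"predicted mass: {med:.1f} (+{hi - med:.1f}/-{med - lo:.1f}) Earth masses")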

        This paper is an important piece in an ongoing conversation about how we should treat relationships between parameters in general; most relationships are not deterministic! The methods used here should be more widely adopted, and are what makes this work such an essential component of the astrophysical literature!


        May 02, 2015

        Jury Final. by Feeling Listless

        Music The BBC's press release for this year's Eurovision has something I hadn't noted before and I haven't heard mentioned much at all:

        The Jury Final – Friday 22 May

        All qualified countries, including those automatically through to the grand final will perform and each national jury will award their scores based on this performance. The Jury Final is not televised.
        Aficionados will already know about this presumably, but I'd always assumed the juries judged on the Saturday night along with the braying masses. How long has this been going on? The Wikipedia says it's the second dress rehearsal and the Eurovision's own website suggests tickets are available. Presumably it's this that Graham Norton et al watch so that they can comment ahead on the night. But does this mean that someone has a fair idea of who's won even before the show goes out?


        Making Slow Television. by Feeling Listless

        TV BBC Four has a slow television season in the coming week, documentaries without narration and very long shots of things happening, which is just the sort of thing they used to do a lot back in the day before everything became repeats of Timewatch and Michael Portillo on trains (although then they would have given it wall to wall coverage and included a parallel film season with Le Quattro Volte and some late Tarkovsky).

        In any case, here's Ian Denyer, the director of one of the strands, Handmade, talking about the process:

        "The brief was brief: no words, no music, long, very long held shots. I added my own restrictions to this – no shot less than ten seconds, and no movement. On the first recces I investigated the possibilities of single shots lasting five minutes. Having grown up being constantly asked to move the camera more and cut faster, this was a joy. All the action would come to the frame. This was a chance to celebrate craft on both sides of the camera."
        The season kicks off with Frederick Wiseman's National Gallery.


        Hot Shots: How to Trigger Star Formation in the Early Universe by Supernova Blast-Waves by Astrobites

        • Title: The First Population II Stars Formed in Externally Enriched Mini-halos
        • Authors: Britton Smith, John Wise, Brian O’Shea, Michael Norman, Sadegh Khochfar
        • First Author’s Institution: Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK
        • Paper Status: Submitted to Monthly Notices of the Royal Astronomical Society

        The initial mass function of stars – from a weirdo to perfect order?

        Stars are formed via the collapse of interstellar gas into dense cores. The initial mass function of stars (IMF), which describes the mass distribution of stars that form in a cluster environment (like this), seems to be pretty universal. However, if we go back in time, or back in redshift in a cosmological way of thinking, things aren’t that clear anymore. Populations of stars which formed very early after the Big Bang contain far fewer metals (elements heavier than helium) than stars in our direct neighbourhood. This is because metals are formed in stars themselves and therefore the earliest ones had to get on without them. Unfortunately, as often in astrophysics, the stellar population naming system is a bit confusing: the earliest stars, and therefore the ones with the least metal content we know about (in fact, their existence still needs to be proven observationally), are called Population III stars. The idea behind this generation scheme is that first, massive Pop III stars formed after the Big Bang, exploded as supernovae and provided the grounds for the formation of the first Pop II stars. These are then the second generation, already containing a bit of metals, but very few in comparison with present-day Pop I stars like the Sun. However, the details of the transition from Pop III to Pop II are the subject of much debate, challenged by observations of extremely metal-poor Pop II stars whose metals must have been injected by a single Pop III supernova.

        Breathing Pop II life into early gas clouds

        Figure 1

        Figure 1: Left panel: Logarithmic density slices showing parts of the evolution of the simulation. The letters indicate important stages: a) the halo in which the Pop III star is placed, b) the dark matter halo which will be enriched and later on start to form Pop II stars, c) the Pop III star begins to shine and d) explodes as a supernova, e) the blast-wave collides with the neighbouring halo (from b). The right panel shows the corresponding logarithmic temperatures. The image at bottom right gives the logarithmic metallicity at time e) (normalized to the solar value). Source: Smith et al. (2015)

        To get an idea of how this could work out, the authors of today’s paper run cosmological simulations with the Enzo code, trying to understand the transition from Pop III to Pop II star formation. To do so, they insert Pop III star particles in a completely metal-free gas environment, like the conditions in the early Universe. As shown in Figure 1, these stars live and radiate away their photons, then explode in huge supernovae and finally enrich their surroundings with metals. The blast-wave of the supernovae triggers turbulence (chaotic movement in the gas particles) in the neighbouring gas structures and induces collapse. As the gas collapses toward the center of the gravitational structure it fragments into several pieces of “low” mass, in accordance with observations of the oldest known stars to date.

        Surprising diversity among the early birds

        As known from cosmology, the build-up of the cosmic web is dominated by dark matter and dark energy. Therefore, the behavior of the baryonic matter, which is the essence of all visible objects in the Universe, follows the clumping and motion of the dark matter around it. The gas falls into so-called dark matter halos, which gravitationally dominate all gas, stars, planets, and so on, in their direct surroundings.

        Figure 2

        Figure 2: The top panels show the cumulative density (left) and metallicity (right) along the line of sight for two Pop III stars which exploded in the simulation. The colored circles correspond to the locations of enriched halos, with the color giving their metallicity. The bottom panels show the logarithmic density in zooms onto these regions. The right supernova has a lot more neighbouring dark matter halos and thus enriches much more material. Source: Britton Smith, Webpage.

        The enormous blast-waves ejected by Pop III supernovae hit these halos and influence the gas inside them. Thus, to get an idea of how much matter is influenced in their simulation, the authors check which halos lie in the contaminated area around the exploding Pop III stars. The two supernovae and the affected dark matter halos are shown in Figure 2. Most interestingly, these affected halos end up with extremely different metallicities, from 0.0001 to 0.01 of the solar metallicity. Even if this channel, the enrichment of gas via only a single supernova, is probably not the dominant process by which Pop II stars are formed, it is very interesting to see such huge diversity in the outcome, which might give rise to a transition from the early IMF to the very uniform IMF we observe today.

        If you want to see the simulation in action, please check the video below. For a description of the video, please visit the website of the main author.


        Mars Exploration Rovers Special Update: MERathon Celebrates Opportunity's Marathon by The Planetary Society

        MER mission ops team members joined other engineers and scientists, some who previously worked on the MER mission, to take on the challenge of a relay marathon to celebrate Opportunity's milestone achievement.


        May 01, 2015

        Things which come for free with a map by Simon Wardley

        When you map, you get certain things for free (other than the technique being creative commons).  These "free" things are provided in the figure and they include :-


        1. User Needs. Key to mapping is to focus on user needs.

        2. Components. Maps are built from components (covering activities, practices, data and knowledge) along with interfaces. People talk about building composable enterprises; a map will take you a long way towards that (a toy sketch of one way to encode this follows the list). They're also extremely useful for organising contract structures.

        3. Flows. Maps contain chains of needs through which money, risk and work flow. Maps are great for examining flows, fault tree analysis, optimising revenue flow and increasing stability.

        4. Context. Critical to mapping is the context provided by the evolution axis. You can use this to determine appropriate methods and techniques (e.g. agile, lean and six sigma or insource and outsource etc) and avoid all the one size fits all pitfalls. You learn how to explore the uncharted and plan for the industrialised. If you want to be composable but you want each component to have the right context then use a map.

        5. Cells. With maps, organising into cell based structures is relatively trivial. Even better you can organise into cell based structures with the right context. In such an environment you can give people not only purpose (i.e. the map and the strategic play) but also autonomy (the cell) and mastery (the context). You also learn how you need multiple cultures not one.

        6. Strategy. Key to strategy is situational awareness and that requires position and movement. Maps give you the ability to identify where you should attack, why you should attack one space over another and determine direction of travel. You can also use maps to learn and build an arsenal of tactical game plays, along with learning common economic patterns, how there are multiple forms of disruption, how to anticipate change with weak signals and how to use and exploit ecosystems.

        7. Communication. Maps are excellent tools for communication between groups - all the business / IT / purchasing alignment stuff is just an artefact of existing methods. They also provide learning environments (i.e. you can learn what works, what doesn't) along with mechanisms to remove silos, duplication & bias, increase collaboration and deal with inertia.
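
        As a toy illustration of points 2 and 3, a map can be held as a plain data structure: components carrying an evolution stage and a visibility, plus the chains of needs between them. This is only a hypothetical sketch of one way to encode a map (the component names and numbers are made up); it is not part of any official mapping tooling.

            from dataclasses import dataclass, field

            @dataclass
            class Component:
                name: str
                evolution: float   # 0.0 (genesis) .. 1.0 (commodity/utility)
                visibility: float  # 0.0 (invisible to the user) .. 1.0 (user-facing need)
                needs: list = field(default_factory=list)  # components this one depends on

            # A hypothetical three-component chain: user need -> activity -> underlying utility
            components = {
                "online photo service": Component("online photo service", 0.55, 1.0, ["web site"]),
                "web site":             Component("web site",             0.70, 0.7, ["compute"]),
                "compute":              Component("compute",              0.85, 0.2, []),
            }

            def chains_of_needs(name, components, path=()):
                """Yield every chain of needs starting from a component (point 3: flows)."""
                path = path + (name,)
                deps = components[name].needs
                if not deps:
                    yield path
                for dep in deps:
                    yield from chains_of_needs(dep, components, path)

            for chain in chains_of_needs("online photo service", components):
                print(" -> ".join(chain))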

        When people talk to me about composable, context, user needs, cell based structure, strategy, communication, alignment, contract management, risk management, financial flows, appropriate methods, efficiency, exploration, weak signals, disruption, ecosystems, culture, organisational learning ... actually lots of things ... I normally ask for a map. If they don't have one then I prepare myself for a delightful session of blah, blah, blah on subjects they barely know anything about. This is why I tend to only work in interesting organisations where competition is really important.

        You can tell, I've just had to listen to another one of those ... blah, blah, strategy, blah, blah, disruption, blah, blah, innovation, blah, blah, story .... sessions without an ounce of situational awareness to be seen.


        Dear IBM, HP, ORACLE, SAP, CISCO ... by Simon Wardley

        Dear IBM, HP, ORACLE, SAP, CISCO ...

        On the off chance that someone is actually bidding for Salesforce and it's not any of you, can I kindly suggest you get together and buy it between you as a consortium. There is one company (you already know who) whose hands you should not want on Salesforce. Even if it is unlikely, the remote possibility of this should be sending shivers down your spine unless you've got inside info.

        kthxbye

        PS. Check your value chains, take a good look at inertia.


        Carey Mulligan on Suffragette. by Feeling Listless



        Film In one of their experiments, the Kermode & Mayo show filmed this week's interview with Carey and uploaded it to the celestial cinema. Although the bulk is about Far From The Madding Crowd (and from the clip the photography looks ravishingly painterly) towards the end there are a few minutes dedicated to Suffragette.


        The Horror of Hybrid Cloud and the real reason why you needed a Chief Digital Officer. by Simon Wardley

        In the previous post I talked about Evolution, Maps and Bad Choices. I wanted to express the importance of understanding evolution (i.e. movement) in order to anticipate change. In this post I want to explore that subject more.

        Let us start again with that first map from 2005 which was of a single line of business for an organisation I ran (Figure 1).

        Figure 1 - A map of Fotango


        From the map we had user needs (step 1), the value chain expressed as chains of needs (step 2) and we understood (and could anticipate) that compute was going to evolve from product to utility (i.e. cloud). We could use weak signals analysis to determine this was going to happen soon - in fact AWS launched EC2 the following year. The map gave us position (i.e. relationship of components) and movement (i.e. how things are evolving).

        Now let us explore more. In figure 2, I've focused on the compute aspect. We knew that compute (an activity, or what we do) had associated practices for architecture (i.e. how we did stuff) - see step 5. Those practices were best practices for the industry, including N+1, scale-up and disaster recovery.

        Now, practices evolve in an identical manner to activities (driven by the same competitive forces); we just call them novel, emerging, good and best. We also had applications (step 6) built on those best practices (step 5) for the product world.  However, compute was evolving (step 3).

        Figure 2 - Practice and Activity



        As compute evolved, the architectural practices co-evolved (see figure 3). Novel architectural practices appeared (design for failure, chaos engines, scale-out) based upon the concept of a more evolved form of compute. We happened to call those architectural practices DevOps (step 7) and applications were built on them. Those practices themselves evolved, becoming emerging, then good, and heading towards best practice for the utility world (step 8).

        Figure 3 - Co-evolution of Practices.


        This created a situation, shown in figure 4 below. Part of the application estate was based upon best practice (traditional) for a product world. We call this LEGACY but I prefer TOXIC IT (see step 9).

        At the same time part of the estate was built on good and evolving practices (DevOps) for the utility world. We call this the FUTURE estate (step 10).

        Figure 4 - Legacy and Future.


        Now, what will happen is that eventually the estate will consolidate on the future estate, i.e. applications will be rewritten or replaced and built on the best practice of "DevOps" (see figure 5).

        Figure 5 - The Future.


        We knew this in 2005. I used this knowledge in part of the gameplay of Ubuntu in 2008. What we didn't know was what it would be called, what exactly those practices would be and who would lead the charge. 

        We knew that some companies would resist the change (i.e. have inertia). There are 16 common forms of inertia, from political capital and the cost of acquiring skills to the cost of changing governance of the current estate (including re-architecting), but we also knew that competition means no-one would have a choice. There are some very specific impacts of evolution which create an effect known as the Red Queen. You never had a choice about cloud; it was never a question of "if".

        However, we also knew that some people would try to somehow have this future world without changing the past. The first attempts were private and enterprise cloud. As that lost ground, the latest efforts combine these with public cloud in order to create a hybrid (see figure 6, step 12).

        Figure 6 - Hybrid.


        The reality is that private and hybrid were always a transitional play. The speed of change, however, is exponential (known as a punctuated equilibrium). We can clearly see this happening from Amazon's AWS figures. By today, you should already be in the process of (or at least planning) decommissioning your private cloud environments, having sweated and dumped much of your legacy. You should be starting your path towards data centre zero. You've had at least ten years to prepare for this and the game has been in play for eight of those. Fortunately I do see this happening with some very large and "traditional" looking organisations (finance, pharma etc). In other cases, well ... this brings me to my last point.

        There are 16 different forms of inertia, including social relationships with past vendors, political capital and existing practices. If you're still building out a private cloud or embarking on a private cloud effort today then chances are you've got a very operational CIO. With a few exceptions, these people aren't thinking about gameplay or the impact of future pricing differentials, and they probably lack the skills necessary to understand effective use of supply chains.

        Don't get me wrong, there are very strategic CIOs out there but these aren't the problem; in those companies you see adaptation happening already. However, if you have found yourself lumbered with a non-strategic CIO then these are the people you should have been planning to replace with a more strategic CIO - which, after all, was the real reason we hired CDOs (Chief Digital Officers).

        Assuming you didn't do something crass and get lumbered with a non-strategic CDO (i.e. one constantly waffling on about innovation, disruption and storytelling without any clear understanding of the landscape) then now is probably the time to be considering that change. If, however, you only hired a CDO because every other company did then heaven help you. Just pray your competitors are in the same boat and your industry isn't interesting enough, or has large enough regulatory or cost barriers to prevent anyone else taking a pop at you.

        Your CDO should by now be embedded in the organisation. The costs of acquiring skill sets are only going to increase, there'll be a crunch in demand as enterprises all try to head towards public cloud, the toxic element of legacy will start to show up in your P&L, your cost of trying to keep up with adapted competitors will escalate and somehow you're going to need to be able to navigate this landscape safely. In the next few years things will really heat up. You should be well prepared and motoring by now and, if this isn't happening, then it's time to start thinking about pulling that lever and making that switch.


        Victoria Coren Mitchell on Bohemians. by Feeling Listless



        TV She's back. Or rather she's back making documentaries. Only Connect is fine, but since the end of Balderdash and Piffle, I've really missed watching VCM walking between things. Well, she's presenter-leading again on BBC Four:

        For a word used to describe a wide range of eccentric individuals, not many people know how to precisely define what it means to be bohemian and whether it's a label to aspire to.

        Victoria Coren Mitchell is attempting to find out with a three-part series on the history of bohemians for BBC Four, made by Wingspan Productions.

        'Bohemians confuse me tremendously,' the presenter and journalist says. 'I don't know whether to find them exciting and inspiring, or annoying and threatening. Possibly all four at once.

        'From these mixed feelings, I know I must be a bourgeois. But I've never been fully immersed in bohemian circles before. I'll be interested to find out whether I end up running into their open-minded embrace, or running screaming away.'
        Let's hope it's as good as her documentary about The History of Corners (featured above).


        On Evolution, Maps and Bad Choices by Simon Wardley

        It took me about ten years of thinking about strategy before I stumbled on the issue of situational awareness in business and drew my first map (2005).  When it comes to competition there are a number of key factors involved in success. These include :-

        1) Purpose : That which causes others to desire to follow you without fear and to have harmony with your goal.

        2) Situational Awareness : Understanding your landscape, the prevailing economic climate & patterns and exploiting this to your advantage. This is the core underpinning of strategic play.

        3) Leadership : your ability to command and be followed. These include the virtues of leadership - sincerity, humanity, courage, firmness and wisdom.

        4) Doctrine : The organisation, the mechanisms of control and governance, its cultural forms, the methods applied.

        All are necessary components. Often you find them lacking in business. The one area I find to be almost devoid of substance is situational awareness and strategic play, hence my constant focus on mapping. One thing that does slightly peeve me is that, having spent ten years looking for a way to map a business and then giving it away creative commons, people insist on changing the axes for NO GOOD REASON. By all means experiment, but think about why. Those axes were based upon many, many thousands of data points; they weren't randomly plucked from the air.

        To show you the problem, let us take a map from 2005 (see figure 1). The map has position and movement (critical parts of situational awareness). It starts with user need (point 1), contains multiple chains of needs (point 2) and even allows you to anticipate change caused by competition (points 3 and 4).

        Figure 1 - A Map.


        The most common way people want to change the map is by changing evolution to either time, diffusion, technology maturity or some form of hype cycle. These are all flawed because you lose any concept of movement and hence any ability to anticipate change.

        For example, let us take the compute component from the above map and put it into a map based upon Gartner's hype cycle (see figure 2). Now certainly you can create a value chain (i.e. position) but compute as a product (i.e. servers) was in the plateau of productivity long ago. With the above Wardley map you can anticipate the development of "cloud" far in advance because it's based upon evolution. With the below hype cycle map, the hype axis measures the hype of an activity and not evolution. There is no way to anticipate evolution. Instead what happens is that a more evolved form (i.e. Cloud or IaaS) appears in the "Technology Trigger".

        Figure 2 - Hype Cycle "Map"


        It doesn't matter whether you use hype cycles, diffusion curves or time. You cannot anticipate evolution and hence change (i.e. movement) without an evolution axis. Without position and movement you will never gain better situational awareness and losing this is a bad choice.

        Does it really matter though? Take the map below of the battle of Thermopylae. The map (like a chessboard, like a Wardley map) allows you to see position and movement. In this case the map is based upon two geographical axes - north to south, east to west.

        Figure 3 - Battle of Thermopylae.


        Now, despite the map giving us position, movement and proving useful - let's change the axes!  Let's pluck two out of the air, say distance from coast and landscape (i.e. type of terrain). Let us now draw that.

        Figure 4 - A modified Map.


        Can you really tell me that by looking at the above X's, which have no positional or movement information, it is obvious we should follow the red lines, block off the straits of Artemisium and force the Persians into Thermopylae? If you were a soldier in Thebes and had been given this map, could you work out where to go? Which direction Thermopylae is in? The modified map, by losing position and movement, is practically useless.

        About ten years of thinking went into finding that first map. Thousands of data points helped to build those axes. Ten years of practice since then has improved it from that first map in 2005.  I know, I know, you're a "strategy consultant", but do think about position and movement and whether you know what the hell you're talking about.

        You can guess: someone gave me an "improved map" which is improved to the point of rendering it useless.


        The Scottish Political Singularity, Act Two by Charlie Stross

        The UK is heading for a general election next Thursday, and for once I'm on the edge of my seat because, per Hunter S. Thompson, the going got weird.

        The overall electoral picture based on polling UK-wide is ambiguous. South of Scotland—meaning, in England and Wales—the classic two-party duopoly that collapsed during the 1970s, admitting the Liberal Democrats as a third minority force, has eroded further. We are seeing the Labour and Conservative parties polling in the low 30s. It is a racing certainty that neither party will be able to form a working majority, which requires 326 seats in the 650 seat House of Commons. The Liberal Democrats lost a lot of support from their soft-left base by going into coalition with the Conservatives, but their electoral heartlands—notably the south-west—are firm enough that while they will lose seats, they will still be a factor after the election; they're unlikely to return fewer than 15 MPs, although at the last election they peaked around 50.

        Getting away from the traditional big three parties, the picture gets more interesting. The homophobic, racist, bigoted scumbags of UKIP (hey, I'm not going to hide my opinions here!) have picked up support haemorrhaging from the right wing of the Conservative party; polling has put them on up to 20%, but they're unlikely to return more than 2-6 MPs because their base is scattered across England. (Outside England they're polling as low as 2-4%, suggesting that they're very much an English nationalist party.) On the opposite pole, the Green party is polling in the 5-10% range, and might pick up an extra MP, taking them to 2 seats. In Northern Ireland, the Democratic Unionist Party (who are just as barkingly xenophobic as UKIP) are also set to return a handful of MPs.

        And then there's Scotland.

        On September 18th last year, we were offered a simple ballot: "should Scotland become an independent country?" 45% of the electorate voted "yes", 55% voted "no", and the turn-out was an eye-popping 87%, so you might think the issue was settled. Indeed, some folks apparently did so—notably Prime Minister David Cameron, who walked back the Scotophilic rhetoric on September 19th with his English Votes for English Laws speech and thereby poured gasoline on the embers of the previous day's fire. Well, the issue clearly isn't settled—and the vote on May 7th is going to up-end the Parliamentary apple cart in a manner that hasn't happened since the Irish Parliamentary Party's showing in 1885. The Labour party was traditionally the party of government in Scotland; so much so that the SNP's victories in forming a minority government in 2007 and a majority one in 2011 were epochal upsets. But worse is happening now. In the past six months, Labour support has collapsed in opinion polls asking about electoral intentions. The SNP are now leading the polls by 34 points with a possible 54% share of the vote—enough in this FPTP electoral system to give them every seat in Scotland.

        Nobody's quite sure why this is happening, but one possibility is simply that the voters who were terrorized by the "project fear" anti-independence campaign are now punishing Labour for campaigning hand-in-hand with the hated Conservatives. Polling suggests a very high turnout for the 2015 election—up to 80% of those polled say they intend to vote—and if the "yes" voters who were previously Labour supporters simply switch sides and vote SNP this would account for most of the huge swing.

        Even if the most recent polling is wrong and we apply traditional weightings to the Scottish poll results, the SNP aren't going to win fewer than 40 seats in Westminster—almost certainly making them the third largest party and, traditionally, the most plausible coalition partner for one of the major parties. If the most extreme outcome happens, the SNP could have 57 seats, effectively blocking any other party configuration from forming a government except for a Conservative/Labour coalition.

        A Conservative/Labour coalition just isn't conceivable.

        While such a hypothetical chimera would deliver a stonking great parliamentary majority, it would be fundamentally unstable. The Conservatives are seeing their base eroded from the right, by UKIP (who are also cannibalizing the traditional hard-right/neofascist base of the BNP). And on the left, the Green Party is positioning itself as a modern social democratic grouping with a strong emphasis on human rights and environmental conservation. (Full disclosure: I am a member of, and voted by post for, the Scottish Green Party. This is a separate party from the English/Welsh Greens, with distinct policy differences in some areas—for one thing, it's also pro-independence.) I believe that a Lab/Con coalition would rapidly haemorrhage MPs from both parties, either joining the smaller fringe parties or sitting as separate party rumps. It would also devastate both parties' prospects in the next election as large numbers of their core voters are motivated by tribal loyalty defined in opposition to the other side's voters.

        On the other hand ...

        For the past few weeks we've seen the Conservatives use the SNP as a stick to beat Labour with in England ("if you vote Labour, you're letting Alex Salmond run England!"), and the Labour party use the SNP as a stick to beat the Conservatives with in Scotland ("if you vote SNP you're letting the Tories run Scotland!"). Both UK-wide parties are committed to the Union of Kingdoms, and have announced that they will not enter a coalition with the SNP under any circumstances. However, their scaremongering tactics are profoundly corrosive to the idea of a parliamentary union of formerly independent states—it's worth noting that Scotland and England merged their parliaments voluntarily rather than as the result of war and conquest (although to be fair the Scottish government's alternative was to declare bankruptcy). By setting up a false polarization between Scottish and English interests within the UK, both major parties are guilty of weakening the glue that holds the nations together. The 45% vote for independence last September is a sign of how dangerously brittle the glue has become: and the EVEL backlash among Conservatives post-September weakened it further.

        Let's look at the underlying picture in Scotland.

        To understand the roots of the England/Scotland argument, you need to realize that it's all about money. Or rather, about how the money is divided up. The Barnett Formula "is a mechanism used by the Treasury in the United Kingdom to automatically adjust the amounts of public expenditure allocated to Northern Ireland, Scotland and Wales to reflect changes in spending levels allocated to public services in England, England and Wales or Great Britain, as appropriate." It's basically a short-term kluge from 1978 ... that has persisted for nearly 40 years.

        English partisan voters resent it because it allocates a little bit more money per capita to Scotland than to England. (Scotland has a lower population density than England, so faces higher infrastructure costs in providing services in outlying regions such as the highlands and islands.) Scottish partisan voters resent it because it allocates a lot less money per capita to Scotland than to England if you take into account the amount of gross revenue raised in Scotland from taxation—Scotland has an oil industry and England doesn't.

        So nobody likes the arrangement, but like democracy in general, it's better than the alternatives. But we have, since 2010, had a government in Westminster that is in some ways the most politically radical since Thatcher. The outgoing coalition was noteworthy for its support of austerity policies in pursuit of deficit reduction long after everybody else realized that this was nuts. Then they flipped to stimulus spending—on private sector crony projects seemingly intended to funnel tax revenue to rentier corporations. They finished selling off the Air Traffic Control system, privatised the Post Office, outsourced the Coast Guard search and rescue helicopters, are working on the Highways Agency, and have been kite-flying about selling off the Fire Service under George Osborne.

        Scotland, with inherently higher infrastructure operating costs than England, is going to feel the pain disproportionately in this scenario. So there's strong resistance to public spending cuts in Scotland, and this points to an intrinsically higher level of support for social services than is electorally popular in England. Hence the trivial political observation that Scottish voters lean to the left relative to English voters: it's self-interest at work.

        We also see differences in the Scottish attitude to immigrants. A large minority of English voters (egged on by their media) are fantastically xenophobic this decade—expressions of anti-immigrant sentiment that were the preserve of neo-Nazis in the 1970s are common currency among English voters and media pundits today. However, in Scotland there's a general consensus that the shutdown on immigration is harming the nation—Scotland has different demographic issues from England, and actually needs the inputs of skilled immigrant labour that the English are rejecting.

        Finally, there's a touchstone of the 1980s left in the UK—nuclear disarmament—that has somehow become a raw political issue in Scotland. The UK's Trident force submarines operate out of Faslane, about 25 miles from the centre of Glasgow—Scotland's largest city. There's considerable ill-will about this, because it's perceived as making Glasgow a strategic nuclear target and putting it at risk of a nuclear accident, all on the Scottish taxpayer's tab. Viewed as an independent country Scotland would have no need for a nuclear deterrent and no more desire for a strategic global military reach than Ireland or Norway. Moreover, Trident is a potent reminder of the undead spectre of Margaret Thatcher, who is somewhere between rabies and HIV in the popularity stakes in Scotland.

        These are the wedges threatening to split the union apart. It appears inevitable that Scotland's voters will not willingly accommodate a conservative policy platform dictated by voters in England—and a Labour party that has triangulated on the centre-right since Tony Blair severed it from its previous socialist roots in 1994 is increasingly oriented towards the interests of English voters.

        What happens on May 8th?

        I honestly have no idea, and anyone who tells you they know what's going to happen is lying.

        However, in broad terms there are two paths that a government—whether a minority administration supported by outsiders on a confidence-and-supply basis, or a formal coalition with a working majority—can take.

        They can attempt to save the union. To do this, they will need to address the fundamental need for constitutional reform before tackling the Barnett formula. The best outcome would be wholesale root-and-branch reform—abolition of the House of Lords, reconstitution of the House of Commons as a federal government with a new electoral system, establishment of a fully devolved English Parliament (sitting separately), and full devolution—Devo Max—for Scotland. This would leave the UK as a federal state similar to Germany, with semi-independent states and the central government handling only overall defense, foreign, and macro-scale fiscal issues.

        Unfortunately any such solution will require the House of Commons to voluntarily relinquish a shitpile of centralized power that they have collectively hoarded as jealously as any dragon. And I don't see the existing Westminster establishment agreeing to do that until they find themselves teetering on the edge of a constitutional abyss—and maybe not even then.

        The alternative is that the festering resentment caused by EVEL and revanchist Scottish nationalism will continue to build. Prognosis if this happens: an SNP landslide in the Holyrood parliament in 2016, and another independence referendum with a clear mandate for independence some time before 2020. If we don't see constitutional reform on the agenda within the lifetime of the next parliament, the UK as an entity will not make it to 2025.


        Understanding mapping and why "NO consultants" by Simon Wardley

        Wardley "value chain" mapping is a technique for understanding your business (including technology) landscape. It's based upon two important principles of situational awareness - the position of things and their movement - and it starts with user needs. Situational awareness is critical from operations to strategic play, but "Why No Consultants?"

        1) Mapping requires an understanding of the environment, which means you need people working within the environment to describe the landscape.

        2) Mapping is a communication tool. It empowers people to take control of their environment and through sharing remove bias, duplication and silos.

        3) Mapping is a learning tool, it enables you to discover what patterns work and helps improve your gameplay.

        Now, unless you intend to hand over understanding of your landscape, communication, empowerment, strategy and learning to a consultancy (i.e. basically become totally dependent upon them) then you need to learn to map yourself within the organisation. I'm sure many consultants would like you to become dependent as they can gouge fees from you from now to the end of your business.

        In UK Gov, there are now several departments mapping, sharing maps, learning and teaching each other.  There's a reason why I made mapping creative commons. It was to give every organisation a means to communicate, learn and improve situational awareness and not to help consultants flog more stuff. Don't squander the gift.


        General Election notes by Goatchurch

        I’ve not been doing as much as I should regarding this General Election. A few leaflet rounds, one canvassing session. After attempting (and failing) to contribute code to the Election Leaflet website, I’ve been handed the job of reading through hundreds of election leaflets each morning to look for anything interesting, which I report by entering it into a google excel spreadsheet. Urgh. But it’s my duty. Takes hours, and I’m going crazy with it.

        Top issues are: NHS more funds, HS2 abolished, Green belt protected, increasing recycling, cutting carbon use, and opposing those ineffective flickering noisy windfarms that clutter up the countryside when we need more flood defences that aren’t going to work due to rising sea levels, you dumb-dumbs.

        Basically, this election should be cancelled for lack of interest. I’ve driven from one end of the country to the other, from Land’s End to Liverpool, then to Newcastle and back to Liverpool, and there are approximately zero election posters of any kind (plus or minus less than 5) in gardens, on walls and billboards. Even the news media is bored to the extent that it barely makes it into the first half of the news hour each day. There is nothing to say.

        Now I’m going camping in a field in Southeast Wales to get humiliated and intimidated at a HG competition for the next few days so I’ll miss whatever comes about internetwise. Be back on Wednesday night in time for the 5am leaflet drop on election day and the count (unless I can avoid it). The real fact is that it’s only the votes that count on the day. Nothing else matters.


        Empathy and Product Development by Albert Wenger

        Technology companies have a tendency towards the quantitative. We like to measure things. And there is a lot that can be measured, from page load times to net promoter scores. Even in board meetings the question “have we A/B tested that” is common. But the quantitative can only take you so far.

        Here is a recent tweet from Patrick at Stripe

        I think the biggest systemic improvement we could make to software and products would be to have a general way to measure user frustration.

        To which I replied the following

        we do – through empathy – which is why end user observation and using one’s own product are crucial

        I have been meaning to elaborate on that in a blog post, so here we go.

        One good definition of empathy is “experiencing emotions that match another person’s emotions.” Or put differently, the way to “measure user frustration” (where frustration is clearly an emotion) is to experience that emotion oneself. There are two ways of accomplishing that: first, observe a user directly and second, put yourself in the user’s position by actually using the product. Surprisingly few companies do both of these well.

        User observation sounds easy but is in fact quite hard. If you want to know how to do it well, I suggest reading “Customers Included” by my friend Mark Hurst. Mark describes a method known as a listening lab, in which you observe one customer at a time without guidance to create as natural a product interaction as possible. There are no task prompts asking the user to take a specific action. Just the occasional reminder to verbalize what they are thinking.

        In order for this to actually result in empathy it is essential to have as many people in the company as possible either directly observe or, at a minimum, watch video of the observation. A written summary by a user experience researcher circulated to the team does *not* do the job. Why? Because the scientific evidence shows that empathy works primarily through reading facial expressions and body posture.

        Another legitimate and important way to experience the same emotion as customers and thus develop empathy is to actually use the product oneself. I am often surprised how many people inside of companies – and for that matter on the board of companies – don’t use the product. And that includes every aspect of a product, including how a new user would experience it, e.g. go through the on-boarding flow yourself. I strongly recommend that everyone inside of the company do this on a semi-regular basis but especially anyone in the leadership team (which should give you empathy not just for the customer but also your team).

        Now some people may say that’s all well for consumer products but we have a B2B product or a developer product so this doesn’t apply to us. Well it does! Developers are humans. People working inside of companies are humans. They experience frustration just as much. And you need to observe them and put yourself in their shoes.

        None of this means that you shouldn’t A/B test or have other quantitative measures. But all of those will mean very little if you don’t have the qualitative context that only observation and usage can provide. Empathy is central to product development.


        What is Trolling? by Jeff Atwood

        If you engage in discussion on the Internet long enough, you're bound to encounter it: someone calling someone else a troll.

        The common interpretation of Troll is the Grimms' Fairy Tales, Lord of the Rings, "hangs out under a bridge" type of troll.

        Thus, a troll is someone who exists to hurt people, cause harm, and break a bunch of stuff because that's something brutish trolls just … do, isn't it?

        In that sense, calling someone a Troll is not so different from the pre-Internet tactic of calling someone a monster – implying that they lack all the self-control and self-awareness a normal human being would have.

        Pretty harsh.

        That might be what the term is evolving to mean, but it's not the original intent.

        The original definition of troll was not a beast, but a fisherman:

        Troll

        verb \ˈtrōl\

        1. to fish with a hook and line that you pull through the water

        2. to search for or try to get (something)

        3. to search through (something)

        If you're curious why the fishing metaphor is so apt, check out this interview:

        There's so much fishing going on here someone should have probably applied for a permit first.

        • He engages in the interview just enough to get the other person to argue. From there, he fishes for anything that can nudge the argument into some kind of car wreck that everyone can gawk at, generating lots of views and publicity.

        • He isn't interested in learning anything about the movie, or getting any insight, however fleeting, into this celebrity and how they approached acting or directing. Those are perfunctory concerns, quickly discarded on the way to their true goal: generating controversy, the more the better.

        I almost feel sorry for Quentin Tarantino, who is so obviously passionate about what he does, because this guy is a classic troll.

        1. He came to generate argument.
        2. He doesn't truly care about the topic.

        Some trolls can seem to care about a topic, because they hold extreme views on it, and will hold forth at great length on said topic, in excruciating detail, to anyone who will listen. For days. Weeks. Months. But this is an illusion.

        The most striking characteristic of the worst trolls is that their position on a given topic is absolutely written in stone, immutable, and they will defend said position to the death in the face of any criticism, evidence, or reason.

        Look. I'm not new to the Internet. I know nobody has ever convinced anybody to change their mind about anything through mere online discussion before. It's unpossible.

        But I love discussion. And in any discussion that has a purpose other than gladiatorial opinion bloodsport, the most telling question you can ask of anyone is this:

        Why are you here?

        Did you join this discussion to learn? To listen? To understand other perspectives? Or are you here to berate us and recite your talking points over and over? Are you more interested in fighting over who is right than actually communicating?

        If you really care about a topic, you should want to learn as much as you can about it, to understand its boundaries, and the endless perspectives and details that make up any interesting topic. Heck, I don't even want anyone to change your mind. But you do have to demonstrate to us that you are at least somewhat willing to entertain other people's perspectives, and potentially evolve your position on the topic to a more nuanced, complex one over time.

        In other words, are you here in good faith?

        People whose actions demonstrate that they are participating in bad faith – whether they are on the "right" side of the debate or not – need to be shown the door.

        So now you know how to identify a troll, at least by the classic definition. But how do you handle a troll?

        You walk away.

        I'm afraid I don't have anything uniquely insightful to offer over that old chestnut, "Don't feed the trolls." Responding to a troll just gives them evidence of their success for others to enjoy, and powerful incentive to try it again to get a rise out of the next sucker and satiate their perverse desire for opinion bloodsport. Someone has to break the chain.

        I'm all for giving people the benefit of the doubt. Just because someone has a controversial opinion, or seems kind of argumentative (guilty, by the way), doesn't automatically make them a troll. But their actions over time might.

        (I also recognize that in matters of social justice, there is sometimes value in speaking out and speaking up, versus walking away.)

        So the next time you encounter someone who can't stop arguing, who seems unable to generate anything other than heat and friction, whose actions amply demonstrate that they are no longer participating in the conversation in good faith … just walk away. Don't take the bait.

        Even if sometimes, that troll is you.



        April 30, 2015

        Soup Safari #24: Harrira Moroccan at Kasbah Cafe & Bazaar. by Feeling Listless







        Lunch. £3.95. Kasbah Cafe & Bazaar, 72 Bold Street, Liverpool, Merseyside L1 4HR. Phone: 0151 707 7744. Website.


        Public Art Collections in North West England: The Contents Page. by Feeling Listless



        Art On Tuesday I posted the final visit report for this project and since it has gone on for a very, very long time I thought you might find the following useful. It's a list of all the venues as they appear on the contents page of the book along with links to the blog posts.

        Accrington - Haworth Art Gallery
        Altrincham - Dunham Massey
        Birkenhead - Williamson Art Gallery and Museum
        Blackburn - Blackburn Museum and Art Gallery
        Blackpool - Grundy Art Gallery
        Bolton - Bolton Museum, Art Gallery and Aquarium
        Burnley - Towneley Hall Art Gallery and Museums
        Bury - Bury Art Gallery and Museum
        Carlisle - Tullie House Museum and Art Gallery
        Chester - Grosvenor Museum
        Coniston - Brantwood and Ruskin Museum
        Grasmere - Wordsworth and Grasmere Museum
        Kendal - Abbot Hall Art Gallery
        Knutsford - Tabley House and Tatton Park
        Lancaster - Lancaster City Museum and Ruskin Library, Lancaster University
        Liverpool - Walker Art Gallery, Sudley House, Tate Liverpool, University of Liverpool Art Gallery and The Oratory
        Macclesfield - West Park Museum
        Manchester - Manchester City Art Gallery and Whitworth Art Gallery
        Oldham - Oldham Art Gallery and Museum
        Port Sunlight - Lady Lever Art Gallery
        Preston - Harris Museum and Art Gallery
        Rawtenstall - Rossendale Museum
        Rochdale - Rochdale Art Gallery
        Runcorn - Norton Priory Museum
        Salford - Salford Museum and Art Gallery and The Lowry
        Southport - Atkinson Art Gallery
        Stalybridge - Astley Cheetham Art Gallery
        Stockport - Stockport War Memorial and Art Gallery
        Warrington - Warrington Museum and Art Gallery
        Wigan - The History Shop


        And the Saga Continues: The Story of Exoplanet WASP-12b by Astrobites

        Title: A Detection of Water in the Transmission Spectrum of the Hot Jupiter WASP-12b and Implications for its Atmospheric Composition 

        Authors: Laura Kreidberg, Michael R. Line, Jacob L. Bean et al.

        First Author’s Institution: University of Chicago

        Although over 4,000 planet candidates have been discovered, only a handful of those candidates are eligible for atmospheric characterization with current ground- and space-based telescopes. Namely, in order to confirm a planet’s existence, you just need to detect the shadow of a planet as it moves in front of a star by measuring the brightness of the star (called photometry). Atmospheric characterization (through transmission spectroscopy) relies on the idea that as the planet passes in front of the disk of its parent star, some photons from the star must pass through the thin atmosphere of the planet. Depending on what gases lie in the planetary atmosphere, some photons will either be absorbed or transmitted. The photons you collect then get spread out over wavelength to produce a spectrum. Therefore you need many more photons, or a much higher signal, than the photometric signal obtained during planet detection. Because of this, when a planet’s transit produces a high enough signal that it can be observed through transmission spectroscopy, it is usually beaten to death via the following process:

        1. Party A makes a claim about X
        2. Party B makes a different observation and infers something else about X
        3. Party C can now make their own observation and either support A, support B or make yet another claim about X
        4. And so on…

        One example of this is exoplanet GJ 1214 b, which was observed here and here and here and here. That is not to say that this is a wasted effort. It is not. Today’s bite beautifully demonstrates why the process outlined above is so desperately needed by discussing what lies within the atmosphere of exoplanet WASP-12b.

        WASP-12b is one of the best-studied hot Jupiter exoplanets due to its large size (1.8 x Jupiter’s radius) and small orbital period (just 1.1 days!). There have been at least 17 parties that have made claims about what combination of gases lies within its planetary atmosphere. More specifically, the claims have largely been centered on whether or not the planet has a carbon-to-oxygen (C/O) ratio greater or less than 1. As you might think, C/O ratios tell you the amount of atmospheric carbon relative to oxygen. Meaning, does the planet have an abundance of oxygen-rich species (e.g. water vapor), or carbon-rich species (e.g. methane)? A previous bite outlines how C/O ratios could tell us about the evolution and formation of a planet. It has also been suggested that we could classify hot Jupiters by what their C/O ratios are. The saga continues as Kreidberg et al. attempt to reveal the mystery of the WASP-12b C/O ratio.
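
        To get a feel for why a big, puffy hot Jupiter like WASP-12b is such a friendly target for transmission spectroscopy, here is a rough order-of-magnitude sketch. It is not from the paper: the planet mass, stellar radius, equilibrium temperature and mean molecular weight below are approximate assumptions for a WASP-12-like system.

            # Physical constants (SI) and reference values
            k_B, m_H, G = 1.381e-23, 1.673e-27, 6.674e-11
            R_jup, M_jup, R_sun = 7.149e7, 1.898e27, 6.957e8

            R_p = 1.8 * R_jup     # planet radius (value quoted in the text)
            M_p = 1.4 * M_jup     # assumed planet mass
            R_s = 1.6 * R_sun     # assumed stellar radius
            T_eq = 2500.0         # assumed equilibrium temperature [K]
            mu = 2.3 * m_H        # assumed mean molecular weight (H2-dominated atmosphere)

            g = G * M_p / R_p**2                 # surface gravity
            H = k_B * T_eq / (mu * g)            # atmospheric scale height
            depth = (R_p / R_s)**2               # geometric transit depth
            signal_per_H = 2 * H * R_p / R_s**2  # extra depth from one scale height of atmosphere

            print(f"scale height H ~ {H / 1e3:.0f} km")
            print(f"transit depth ~ {depth * 100:.2f} %")
            print(f"signal per scale height ~ {signal_per_H * 1e6:.0f} ppm "
                  "(an absorption feature spans a few scale heights)")

        With these assumed numbers, the atmosphere adds a couple of hundred parts per million per scale height on top of a transit depth of roughly 1.3%, which is why so many photons (and several transits) are needed to pull an absorption feature out of the noise.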

        Observations with the Hubble Space Telescope

        When observing exoplanetary atmospheres via spectroscopy you want to make sure you have large wavelength coverage, very good knowledge of the host star’s variability, and as many transits as possible (more info here). Kreidberg et al. used the Hubble Space Telescope’s (HST) Wide Field Camera 3 (WFC3) instrument to get six transits (previous HST observations only had one) in the wavelength region 0.82-1.65 microns (previous HST observations only covered 1.12-1.65 microns). Furthermore, they obtained 428 images of the host star over a three-year period, which gave them an excellent idea of how the star was varying over time. These observations result in a transit light curve, shown below.

        Top image shows the transit light curve of the WASP-12 system. Different colors signify the date of observation. The bottom image shows the observations minus the light curve model. Residuals show that the model and the light curve are in good agreement.

        Now what?

        Once you have the data, the process of actually inferring what is in the planetary atmosphere is a completely different ball game. The process of inferring from your data is usually referred to as retrieval. Planetary atmospheric retrieval is a relatively young field, but we have been retrieving atmospheric signals from stars for years. Nevertheless, as this bite on stellar models points out, there is often disagreement among the models needed for atmospheric retrieval. And if there is disagreement in the well-developed stellar atmosphere codes, you had better believe there is going to be disagreement between the less mature planetary atmosphere codes. Therefore, to boost the credibility of their retrieval, Kreidberg et al. used several different techniques to retrieve their atmospheric signal.
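        To make “retrieval” a little less abstract, here is a toy sketch, emphatically not the authors’ code or models: it fits a two-parameter model (a flat transit depth plus a Gaussian bump at 1.4 microns standing in for the water band) to a fake spectrum. Real retrievals fit physically motivated radiative-transfer models with many more parameters, usually with Bayesian samplers, and every number below is invented purely to illustrate the workflow.

        ```python
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)

        def toy_model(wl_um, baseline_ppm, amp_ppm):
            """Flat transit depth plus a Gaussian 'water band' centred at 1.4 microns."""
            return baseline_ppm + amp_ppm * np.exp(-0.5 * ((wl_um - 1.4) / 0.05) ** 2)

        # Fake observations on a grid loosely resembling WFC3's 1.1-1.65 micron range
        wl = np.linspace(1.1, 1.65, 25)
        err = 60.0                                        # assumed per-bin uncertainty [ppm]
        obs = toy_model(wl, 14000.0, 300.0) + rng.normal(0.0, err, wl.size)

        # "Retrieval": find the parameters that best reproduce the data
        popt, pcov = curve_fit(toy_model, wl, obs, p0=[14000.0, 100.0],
                               sigma=np.full(wl.size, err))
        (baseline, amp), (baseline_err, amp_err) = popt, np.sqrt(np.diag(pcov))

        print(f"baseline depth       = {baseline:.0f} +/- {baseline_err:.0f} ppm")
        print(f"water-band amplitude = {amp:.0f} +/- {amp_err:.0f} ppm "
              f"({amp / amp_err:.1f} sigma)")
        ```

        The point of using several independent retrieval codes, as the authors do, is that the answer should not depend on which model and fitting machinery you happened to pick.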

        What do we learn?

        By implementing and comparing several retrieval techniques, the authors were able to deduce (for the first time ever!) that water is present in the atmosphere of WASP-12b. Even though this planet has been observed roughly 17 times, this was the first unambiguous spectroscopic detection of a molecule, which speaks to how our observational techniques and models of planetary atmospheres are improving and evolving. You can see in the transmission spectrum below that the water band feature at 1.4 microns is very clear and the error bars are well below the signal of the feature. Due to the strength of their signal, Kreidberg et al. were also able to constrain the water abundance to a volume mixing ratio of 10^-5 to 10^-2, allowing them to speak to the infamous C/O ratio. Broadly speaking, high water abundances lead to low C/O ratios. So it is no surprise that their C/O ratio is calculated to be approximately 0.5 (i.e. less than one).

        Here the points are the extracted transmission spectrum of WASP-12b measured with HST. The blue squares are the best fit model and the shaded regions are 1 and 2 sigma credible intervals. The increase in signal at 1.4 microns is the detected water band.
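        As a toy illustration of why a large water abundance drags the C/O ratio below one, the sketch below simply counts carbon and oxygen atoms from a set of assumed volume mixing ratios. The mixing ratios are invented for illustration (only the water value is placed inside the 10^-5 to 10^-2 range quoted above); a real C/O estimate comes out of the full retrieval, not from bookkeeping like this.

        ```python
        # Toy C/O bookkeeping with invented volume mixing ratios.
        mixing_ratios = {"H2O": 1e-3, "CO": 5e-4, "CH4": 1e-6, "CO2": 1e-7}

        # (carbon atoms, oxygen atoms) per molecule
        atoms = {"H2O": (0, 1), "CO": (1, 1), "CH4": (1, 0), "CO2": (1, 2)}

        carbon = sum(x * atoms[m][0] for m, x in mixing_ratios.items())
        oxygen = sum(x * atoms[m][1] for m, x in mixing_ratios.items())

        # With water this abundant, oxygen dominates and C/O comes out well below 1 (~0.33 here).
        print(f"toy C/O = {carbon / oxygen:.2f}")
        ```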

        Who is right?

        The authors of this paper point out that their result is in tension with previous studies, which have also analyzed WASP-12b with high precision data and state-of-the-art models. So where does that leave us? Is one group wrong and another right? The answer to this is not as black and white as you might think. Planetary atmospheres are highly complex, so confining an entire atmosphere into one number (C/O) might not be very informative. The authors (and everyone else) point out that their retrieval techniques assume the planet is 1-dimensional (no variations in latitude or longitude). Think about it in terms of Earth: imagine if you tried to claim Antarctica and Brazil could be described by the same temperature profile! One way of alleviating the issue of 1-D models is to observe the planet in several different geometrical alignments so that we can gain an intuition for the variations across the planet’s atmosphere.

        All in all, Kreidberg et al. have done a great job outlining what lessons we can learn from this example… If you want to really understand what is going on in a planetary atmosphere you must:

        1. Observe the planet from as many different angles as possible (i.e. observe the planet when it is passing right in front of the star, right before it is passing behind the star, etc.).
        2. Get the highest precision observations possible, in order to resolve atmospheric features well
        3. Use several different atmospheric retrieval techniques to make sure your answers are not model dependent

        Sounds pretty easy, huh? Hah! Looks like exoplanet observers have a lot of work to do.

         


        [Updated] Good Planetary Support in A Flawed NASA Bill by The Planetary Society

        Casey Dreier gives a brief summary of the House draft bill released the other day that would authorize NASA funding for the years 2016 and 2017.


        April 29, 2015

        AWS and Gross Margin. by Simon Wardley

        AWS has now reported, and there is a lot of noise over margins, including ample confusion over operating margin vs gross margin.

        A couple of things to begin with. Back in my Canonical days I plotted a forward run rate for revenue of AWS. This was based upon lots of horrendous assumptions and, to be honest, I'm more interested in the direction of travel than in the actual figures. A copy of the output of that model is provided in figure 1.

        Figure 1 - forward rate.


        Now, what the model says is that after 2014, the revenue for AWS should exceed $8Bn in each subsequent year. After 2015, the revenue for AWS should exceed $16Bn in each subsequent year and so forth. Don't ask me what the actual revenue will be - I don't care. I'm more interested in the speed of change of the punctuated equilibrium that is occurring.
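        For what it's worth, that floor is trivial to reproduce; the sketch below just doubles the minimum run rate each year from the $8Bn level quoted above, and as stated the shape matters far more than the specific figures.

        ```python
        # Doubling revenue floor, as described above: > $8Bn after 2014, > $16Bn after 2015, etc.
        # Purely illustrative of the model's shape, not a forecast.
        floor_bn = 8
        for year in range(2015, 2021):
            print(f"{year}: AWS revenue floor > ${floor_bn}Bn")
            floor_bn *= 2
        ```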

        A couple of things to note. Compute is price elastic (it has been for 30 odd years). What this means is that as prices drop then volume increases. Today, I can buy a million times more compute than I could in the 1980s for $1,000. This doesn't mean my IT budget has reduced a million fold in that time, quite the opposite. What has happened is I've ended up doing vastly more stuff.

        This is the thing about dropping prices in a price elastic market: demand goes up. But if you're already doubling in physical size (or, as AMZN has stated, increasing capacity by 90% per year) due to a punctuated equilibrium and a shift from one model of products to utility services, then you have to be very careful of constraints. For infrastructure there is a constraint - the time, material and land required to build a data centre. What this means is that it is highly likely that Amazon has to carefully manage price reductions. It would be easy to drop prices causing an increase in demand beyond the ability of Amazon to supply. This 'weakness' was the one I told HP / Dell & IBM to exploit back in 2008 in order to fragment the market. They didn't - silly sods.

        However, over time the market will level to a more manageable pace of change, i.e. the ravages of the punctuated equilibrium will have passed and we're down to good old price elasticity and operational efficiency. It is really useful therefore to get an idea of how far prices can fall.

        The reason for this is rather simple. Cloud is not about saving money - never was. It's about doing more stuff with exactly the same amount of money. That can cause a real headache in competition. For example, let us say your company has an annual revenue of $10 Bn and spends around 1% of its revenue on infrastructure, platforms and related technology - say $100M p.a.

        Now, what matters is the efficiency delta between your provision and utility services like AWS. Many people claim they can be price equivalent to AWS for infrastructure, but often I find that the majority of the costs (e.g. power, air conditioning & other services, building, cost of money, maintenance, spare capacity) are discounted by claiming that they belong to another budget, or are just ignored (in the case of capacity cost). This is why I tell companies that they really need to set up their IT services as a separate company and force it to run a P&L. Hardware and software costs usually only account for 20%-30% of the actual cost of the services 'sold'.

        Oh, as a hint if you're a CEO / CFO and your CIO says they're building a private cloud comparable to AWS then the first question you should ask when looking at the cost comparison is "What % of the cost is power?" If they bluster or say it's covered elsewhere then you're likely to be building a dud. Start digging into it.

        The other problem is that people compare to AWS prices today and ignore future pricing. The problem here is that if there is a high gross margin for AWS then, as the constraints become more manageable, prices will drop to compensate and increase demand. When you look at the problem through a lens of future pricing and actual cost then in some cases you can easily reach a 20x differential.

        But what's the big deal? So what if your competitor reduces their infrastructure, platforms and related technology costs from $100M to $5M? That's only a $95M saving, and what is at stake is the whole $10Bn of revenue. Sounds risky? Wrong.

        Your competitor won't reduce their cost through efficiency, they'll do more stuff. So, they'll spend $100M p.a. but do vastly more with it. To keep up using an "old" and inefficient model, you'll need to be spending $2Bn p.a. That's not going to happen. What is going to happen instead is your competitor will be able to differentiate and provide a wealth of new services faster and more cheaply than you in the market until you are forced to adapt. But by then you'll have lost market share and the damage will be done - not least of all because marketing & biz will be at the throats of IT more than ever. You have no choice about cloud unless you can somehow get your actual costs down to match that future pricing. Very few have the scale and capability to do this.
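        To make the arithmetic explicit, here is a small sketch using only the figures already quoted ($100M p.a. of spend, a roughly 20x differential at future pricing); the numbers belong to the worked example, not to anyone's actual accounts.

        ```python
        # Worked example from the text: both firms spend $100M p.a. on infrastructure,
        # but the competitor runs on utility services that are ~20x cheaper at future pricing.
        your_spend_m = 100
        competitor_spend_m = 100
        differential = 20                      # the ~20x cost advantage discussed above

        # The competitor doesn't pocket the saving - they do more with the same budget.
        competitor_effective_output_m = competitor_spend_m * differential

        # To deliver the same output on the old model, you would need:
        spend_to_match_m = competitor_effective_output_m

        print(f"you get ~${your_spend_m}M of effective IT for ${your_spend_m}M")
        print(f"competitor gets ~${competitor_effective_output_m}M of effective IT for ${competitor_spend_m}M")
        print(f"matching that on the old model costs ~${spend_to_match_m / 1000:.0f}Bn p.a.")
        ```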

        So, how low can that future pricing go? Looking at the AWS report, some are saying they only make a 17% margin. First of all, that's operating margin, which covers all operating expense (i.e. all costs bar tax and interest). This will include, unless US reporting rules are somewhat different to what I remember:

        1) cost of providing Amazon's own estate.
        2) cost / capital leases / staffing costs / depreciation for any future build - NB given AMZN is doubling in capacity each year, this will be significant.
        3) SG&A costs which tend to be high when building up a business.
        4) development costs for introduction of new industrialised services.

        Many of these operating costs are likely to reduce as a percentage as we pass through this punctuated equilibrium (i.e. as we move towards using utility services as a norm). The speed of build-up of new data centres and investment in future capacity will become more manageable (controlled by price elasticity alone). The amount spent on sales and marketing to persuade people of the benefit of cloud will reduce (we will just be using it), etc.

        To give an idea of what the potential future pricing might be, you need to look at gross margin, i.e. revenue minus cost of goods sold. However, AWS doesn't give you those figures (and for good reasons). Furthermore AWS is made up of many different services - compute, storage etc - and the gross margin is likely to be very different on each of those.

        Now, if you simply look at the revenue changes then AWS accounts for 37% of the growth of AMZN in 1Q. By taking the operating expense items covering technology, fulfilment, marketing and SG&A, and making the awful assumption that all areas of the business are equal (likely to be a huge underestimation), you get a gross margin figure of around 50% for AWS.

        You could get a more accurate picture by profiling the lines of business based upon past reports etc, but I can't be bothered to spend more than ten minutes on this as it's not my area of interest. However, I don't think it's unreasonable to expect AWS gross margins to be north of 60% based upon this and experience. This matters because it gives you an idea of how large future pricing cuts could be, and that's not even factoring in efficiency in supply, Moore's law etc.
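        If you want to replicate that back-of-the-envelope estimate, the sketch below shows the shape of the calculation: allocate a share of the consolidated cost lines to AWS (the "awful assumption" above) and add that back onto the reported operating margin. The line-item figures are rough placeholders from memory of the quarterly release, so check them against the actual filing before leaning on the output.

        ```python
        # Crude AWS gross margin estimate. All dollar figures are placeholders ($Bn, quarterly).
        amzn_revenue = 22.7            # consolidated revenue (placeholder)
        aws_revenue = 1.57             # AWS revenue (placeholder)
        aws_operating_margin = 0.17    # the ~17% operating margin discussed above

        # Consolidated operating-expense lines sitting between gross profit and
        # operating income ($Bn, placeholders).
        opex_lines = {"technology_and_content": 2.75, "fulfilment": 2.76,
                      "marketing": 1.00, "general_and_admin": 0.43}

        # The "awful assumption that all areas of business are equal": allocate the
        # consolidated opex to AWS in proportion to its share of revenue.
        aws_share = aws_revenue / amzn_revenue
        aws_allocated_opex = sum(v * aws_share for v in opex_lines.values())

        gross_margin = aws_operating_margin + aws_allocated_opex / aws_revenue
        print(f"implied AWS gross margin ~ {gross_margin:.0%}")   # lands near the ~50% above
        ```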

        If you're looking at the AWS figures and thinking that 17% operating margin is high but there isn't much scope for price cuts, then you're brewing for a shock. Consider yourself warned and put some effort into actually analysing the figures.

        NB. I retired from Cloud back in 2010. Don't ask me to put any effort into detailing this more. I have bigger fish to fry and have close to zero interest in this subject. I only put this up because I keep seeing elementary mistakes being made. This stuff doesn't even cover the enormous ecosystem advantage that Amazon creates or the basic benefits of componentisation. Don't underestimate those either - you will get spanked if you're trying to compete without understanding this stuff.


        My Favourite Film of 1998. by Feeling Listless



        Film After having waited eagerly to see Shakespeare in Love since seeing a preview in Empire Magazine (welcome to the 90s), I inadvertently managed to see a snippet of its concluding moments having blundered into the wrong screen at a multiplex. In the late 90s, I’d often travel out to the newly opened Showcase Cinema on the East Lancs Road and spend an afternoon seeing two or three films, and on this day at the beginning of February 1999 (which also included A Bug’s Life) my excitement got the better of me and I managed to not bother to look at whichever screen was listed on the ticket and blundered into the wrong one. I saw Will and Viola kissing, which as everything shook out didn’t turn out to be too much of a spoiler.

        Although I can trace my love of Doctor Who to a single moment in an audio episode, there isn’t really one single incident which led me to offer myself up as a fan of Shakespeare. There was studying Othello and Measure for Measure at school of course and I was pretty impressed after seeing the BBC adaptation of the latter but I think that probably had more to do with a crush on Kate Nelligan as Isabella, which is ironic considering what the play is about. But it was enough of a spark for me to want to see more of his plays especially in adaptation, especially if directed by Ken Branagh. Plus I remember watching a lot of the BBC’s Bard on the Box season in 1994 and still have the VHS of the Playing the Dane documentary from then.

        Shakespeare In Love must certainly have also helped. Although I understood the whole thing to be an artifice and a fiction, the screenplay, aided by Tom Stoppard’s rewrite, has enough in-jokes and truths which, coupled with my own shaky memory of background reading at school, convinced me that it might as well be mostly true. Not the love story or the process of writing Romeo and Juliet. But the recreation of the theatres, of London, of customs, of costumes and the way people presented themselves. The cleverness of Stoppard utilising many of Shakespeare’s own narrative devices, a model utilised again later by the makers of Becoming Jane, which deliberately has the style of a film adaptation of an Austen novel.

        There have been other versions of Shakespeare’s life, the BBC’s A Waste of Shame, ITV’s Will Shakespeare, Anony … (cough) and taken together they offer different facets of the man and his time. But none of them quite capture the romance of what it must have been like to be a playgoer in that period, the version that attendees at the Globe in London must have in their heads. From the opening pan across the rafters of the Rose and the opening bars of Stephen Warbeck’s music, I ache, and it’s an ache that continues throughout. Few films have given me that sort of emotional reaction before anything related to story or character has kicked in, even Saving Private Ryan, which I know everyone now thinks should have won the Oscar that year.

        The release came and went and then six months later I won a VHS copy of the film from Empire, which I must have watched a dozen times. Then when I bought my first dvd player from Tesco, the venerable Wharfedale, one of the first films I hired from the Central Library in town (along with Ghostbusters) was Shakespeare in Love, so I could enjoy the settings in the correct aspect ratio, again marvelling at the detail and watching all the audio commentaries. Like so many of the films on this list, I can trace them through the various formats I’ve owned them in. Not that I have the blu-ray of it, which is something I must rectify. But I do have Stephen Warbeck’s score on cd, which was the soundtrack to my visit to Stratford-Upon-Avon.

        To complete this narrative thread, the other project which really crystallised my love of Shakespeare and made much of that visit to Stratford so familiar was Michael Wood’s series In Search of Shakespeare, broadcast in July 2003 (which I oddly failed to mention on this blog). Here was the pageant of the writer’s life spread across four hours and a real explanation of why his words were important and mattered, but with just enough mystery for someone like me to want to go off and read more and see more. Which I did, purchasing the complete collection of BBC adaptations not long afterwards, and that was pretty much my fate sealed, and that’s how Shakespeare In Love helped me fall in love with Shakespeare.


        Investing in Deep Learning: Clarifai by Albert Wenger

        Today’s blog post is over at USV.com announcing our investment in Clarifai, a deep learning company based in New York City. I have written a fair bit about machine learning and artificial intelligence here on Continuations. We have made a number of investments in this area already including HumanDX and Sift Science. I believe that we are at the beginning of an extraordinary expansion of the possible. And I am thrilled to be supporting another effort with Clarifai.


        Why is star formation so inefficient? by Astrobites

        Why make it easy when you can make it complicated?

        The historic picture of star formation (SF), that interstellar gas directly collapses due to its own gravity, suffers from oversimplification. Incorporating this view into simulations of the process results in a crazily high star formation efficiency, which means that much too much gas is turned into stars. For a long time, it was thought that magnetic effects like ambipolar diffusion, in which charged particles coupled to the magnetic field slow down uncharged ones by collisions, hinder the efficiency of the star formation process and reduce the star formation rate to the observed levels. However, this picture requires the centers of molecular cloud cores, where stars are created, to be extremely dense and their envelopes to feature high magnetic fluxes, which is unfortunately not observed. Thus, there must be other mechanisms helping to prevent all the gas from being turned into stars quickly.

        Turbulence and other stuff…

        Several mechanisms have been proposed to overcome the problem of the magnetic influence and simultaneously diminish star formation rates to realistic values. In this study the author runs magnetohydrodynamical (gas + magnetic fields) simulations of star formation with the FLASH code and tests the influence of different mechanisms on the star formation process. The effects incorporated in the simulations are:

        • Turbulence, the random motion of fluid particles, can stabilise the gas clouds on large scales against gravitational collapse.
        • Magnetic fields might still play a role via magnetic pressure, which adds a gravity-opposing physical pressure to the clouds, thus reducing the collapse rate and therefore the SFR.
        • Stellar feedback, like jets and outflows, produced by a protostellar accretion disk, can alter the cloud structure and produce even more turbulence by shocks.

        In the simulations performed, one physical mechanism is added at a time, starting from a relatively simple simulation and building up to a more and more complex one with all ingredients turned on. The following video (created by C. Federrath; make sure to watch it in HD) shows the density of gas along our line of sight, indicated from white (low density) to blue (intermediate density) and yellow (high density). Stars that have successfully formed are shown as white dots. Each box represents a simulation with a specific combination of the above mechanisms (check the descriptions in the top left).

        So what do we see? Obviously, in the simulation with gravity only (top left) a lot of stars pop up extremely quickly. Adding more and more mechanisms from the above list decreases the star formation efficiency and star formation rate, as nicely visualised in Figure 1.

        Figure 1: Rate of star formation versus time, including different physical mechanisms. The black dashed region indicates the area which is constrained from observations. The SFR is much too high for the simpler simulations. Adding more complexity by considering turbulence, magnetic fields and stellar feedback by jets and outflows decreases it to more realistic values. The sudden drop in SFR at t ~ 4 freefall times for the feedback simulation (black line) is thought to relate to self-regulation of the feedback mechanism (see text). Source: Federrath (2015)

        It seems that each of the ingredients used in the increasingly complex simulations reduces the SFR by about a factor of two! This means that turbulence, magnetic fields and stellar feedback seem to contribute approximately equally to the needed decrease in the SFR. Additionally, the sudden drop in SFR in the most complex simulation shows signs of an emergent phenomenon: after an initial burst in star formation by the collapsing gas, the jets and outflows of the forming stars trigger turbulence in the surviving gas, which increases fragmentation, leading to more, smaller clumps with an overall lower SFR. From that point onwards the SFR stays rather constant, which can be seen as a self-regulation mechanism.

        Reality check and ways to go

        The most notable result from this work is that the simulations reached a star formation rate similar to the rate observed in molecular clouds. The most complex simulation features an SFR ~ 0.04, in comparison with SFR ~ 0.01 for observations. From that, the author concludes that the roles of turbulence and magnetic fields are likely more important than suggested by recent computational results, which underlined the importance of stellar feedback but were not able to reach values as low as this. Finally, the author argues that further reduction of the SFR in theoretical studies requires considering additional types of feedback beyond those included in this study. A possible candidate here might be radiation pressure, the influence of stellar irradiation on the surrounding gas.
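        The factor-of-two bookkeeping above is easy to sanity-check: the sketch below starts from an illustrative gravity-only value chosen so that three successive halvings land on the SFR ~ 0.04 of the most complex simulation, then compares the result with the observed ~0.01.

        ```python
        # Each added mechanism roughly halves the star formation rate (per freefall time).
        sfr = 0.32    # illustrative gravity-only value (0.32 / 2 / 2 / 2 = 0.04)
        for mechanism in ("turbulence", "magnetic fields", "jet/outflow feedback"):
            sfr /= 2  # ~factor-of-two reduction per mechanism, as described above
            print(f"+ {mechanism:<22s} SFR ~ {sfr:.2f}")

        observed = 0.01
        print(f"observed molecular-cloud SFR ~ {observed}, "
              f"still a factor of ~{sfr / observed:.0f} below the final simulated value")
        ```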


        Looking Down On Jupiter's North Pole by The Planetary Society

        Ted Stryk shares the most direct view of a Jovian pole ever captured by a spacecraft.


        Russian Resupply Ship Spins Out Of Control after Reaching Orbit by The Planetary Society

        An International Space Station-bound cargo craft is spinning out of control in Earth orbit following an afternoon launch from the Baikonur Cosmodrome in Kazakhstan.


        April 28, 2015

        Public Art Collections in North West England: The Walker Art Gallery. by Feeling Listless



        Art  The final end. Back in 2007 when I began this project, to visit all the venues listed in Edward Morris’s book Public Art Collections in North-West England, I hadn’t actually planned to visit all the venues listed in Edward Morris’s book Public Art Collections in North-West England. As I said in that original post, for the Atkinson Gallery in Southport, I originally planned to “take some trips to a few of these local smaller galleries and report back on what I find”. The blog doesn’t then have a later post where I actually say I’m going to “catch them all”, but there was definitely a moment some time in about 2007 or 2008 when I decided that I might as well.

        It’s probably about then I determined that it would be best to leave The Walker until last because having worked there, being so familiar with the collection, it seemed more valuable to head out and visit the places where I’d never worked and was unfamiliar with the collection. Then, I was only seven years out from that employment. Now it’s fifteen years. Of course, I’ve been to the Walker in between, many times, for temporary exhibitions, but on each and every occasion I’ve avoided looking too closely at the permanent collection because I knew at some point I’d be approaching it as part of this project. The quest is the quest. Or rather was. Now.

        Do I need to talk about my time at the Walker? Perhaps I do. This was at the end of the 90s, when I was contracted for one or two days a week and my business was collating various volunteer projects in which items in the collection had been added to a computer database, and completing the job by giving every object in the collection a thorough computer record based on internal archival documents. Ultimately I was cataloguing the collection and readying the data so it could be uploaded to the newer systems coming on stream. As I wandered around, I wondered if the information on the walls was the same as I typed in back then.

        In truth, I visited the Walker twice for this project in the end. My first attempt was last October with the idea that I’d complete the project before my fortieth birthday. But the gallery having so much art, plus an eye infection (yes, really), meant I only managed the first three rooms that day. So I returned yesterday to complete the survey, noting that some of the paintings I’d seen in those first three rooms were no longer on the walls.  I could have spent even longer but at a certain point I have to put a stop to all this, and if the gallery wasn’t as geographically convenient I wouldn’t have had a choice anyway. I had to wise up.

        As you might expect given that he was a curator at the gallery until his retirement in 1999, two years before the publication of the book, Edward dedicates fourteen pages to the Walker including four for illustrations. I’ll provide the usual synopsis in a moment, but it’s important to stress that unlike most of the other galleries in the book, the Walker, as with Sudley House and the Lady Lever, is a national institution with the same status as the London galleries. In 1986 it stepped outside of local authority control, gaining its funding from central rather than local government.

        Yet despite that, it still retains an element of obscurity. Perhaps I should whisper this, but there are still people I’ve met visiting Liverpool for the first time from the south to whom I still have to recommend the Walker, or who have stumbled into it and told me afterwards how surprised they were, not just that it exists but at the quality of its collection. Even now. Even in 2015. When I began this blogging project, it was with the aim of promoting these local venues, to demonstrate the quality of the work on display, and that’s still vitally important, reminding people that as they glance towards London with envious eyes, there’s some fabulous art on their own doorstep.

        The Walker’s collection began with a bankruptcy. In 1816, William Roscoe found himself at the sharp end of an economic downturn and his art collection, much of it from 1300 to 1550, was liquidated. Luckily for us it was sold to a group of his philanthropic friends, Liverpool merchants with nonconformist attitudes, who then presented it to the Liverpool Royal Institution, a cultural club founded by even wealthier merchants, and this then became the first public art collection in the country (albeit technically privately owned and with a visitor charge) and the model for many of the future examples in the book.

        But despite the publication of a number of thorough catalogues and the purpose building of a venue to house them between 1840 and 1843, Edward says, the collection did not prove popular, and in the early 1850s Liverpool Town Council attempted to take over the institution and its collections as the basis for a municipal art collection, as per other local authorities. But the institution’s members resisted, negotiations collapsed, and by 1893 the works were deposited on loan with the Walker Art Gallery, then finally presented to it in 1948. At which point, I think you will have noticed, the narrative becomes slightly more complicated.

        The Town Council, with the support of Roscoe, had already been holding exhibitions of contemporary art at various intervals between the end of the 18th and beginning of the 19th century, which began with the support of the Liverpool Institution but continued under the control of a group of local artists calling themselves the Liverpool Academy. These ongoing exhibitions, from which the town council was also purchasing items for its permanent collection, were originally presented in the old Liverpool Museum until, in 1873, the local brewer Andrew Barclay Walker gave the council £25,000 to build a new dedicated art gallery, which opened for them in 1877.

        So the initial foundations of the collection were built from the Royal Institution and the local council’s purchases from the Liverpool Academy’s Autumn exhibitions, years before the Tate and the galleries of other major provincial cities. But the process of increasing the collection doesn’t differ markedly, a mixture of purchases and bequests, though with the eye of a national gallery, with concerted efforts to bolster various aspects of the collection to reflect various art eras and movements. In 1961, for example, a £70,000 appeal specifically directed at industry and commerce in Liverpool was launched for the purchase of impressionist paintings.

        Which explains why the collection has such range and depth, punching above its weight as a “local” museum, and why it seems so surprising to visitors who might not otherwise know of its existence. As well as the medieval collection, which is as good in some aspects as the National Gallery in London, and the pre-Raphaelites, which rival Tate Britain, we have Murillo, Rubens, Hogarth, Poussin, Seurat, Degas, Monet, Cezanne, Matisse, Freud, a few Gainsboroughs, some Stubbs, a Rembrandt and a Hockney (thanks to the John Moores Painting prize, arguably the successor to the Autumn exhibition and also the source of many purchases).

        As you can see from the room guide, the gallery arranges its collection in chronological order, beginning with the Medieval and Renaissance period through to “1950-now”, the final room offering a series of changing displays. There’s also a semi-permanent display of John Moores Painting Prize winners, a sculpture gallery and a relatively new Craft and Design gallery installed in the space where my office used to be. The overall atmosphere is of grandeur and, unlike some other regionals, after navigating the massive entrance hall there is a display area to match, large rooms filled with massive art works.

        All of which means it is impossible to really approach the “what I saw and what I liked” section of these posts in the usual way since, as with Manchester Art Gallery, it is a collection of range and depth. The BBC’s Your Paintings lists 2,254 oils and clicking on any of the search pages reveals a platter of works that would be the entire display of some of the places I’ve visited in the past decade. So I’ve decided to utilise the same arbitrarily chosen theme and concentrate on the works either directly or somewhat related to Shakespeare, concentrating on those items which are actually on display (sorry, Robert Fowler’s Ariel).

        In the first set of rooms we find, next to each other, a portrait of Henry VIII attributed to the Workshop of Hans Holbein and one of his daughter Elizabeth I attributed to Nicholas Hilliard.  The former is the classic, iconic image of the king which appears on dozens of different portraits, all with the same grand pose if different costume.  The Walker version is especially similar to the portrait at Petworth House.  The National Portrait Gallery website has a lengthy article analysing the "Hilliard" portrait along with its twin from their collection after they met for the Making Art in Tudor Britain research project, though it won't categorically agree on who they were painted by.

        For all Shakespeare's parody in the final act of A Midsummer Night's Dream, Ovid's story of Pyramus and Thisbe was a popular subject in the 16th and 17th centuries, especially amongst painters, and in room three we find Gaspard Dughet's version, Landscape with Pyramus and Thisbe.  It's the moment when Thisbe discovers the dead body of her lover Pyramus, sealing their mutual suicide, just the moment when the best productions of Shakespeare's version allow the actors playing Flute and Bottom, Thisbe and Pyramus, to drop the comedy and play the emotion for real, confronting the audience with the reality, sticking the metaphoric knife into us as well as each other on the stage within a stage.

        Arguably the most important, or at least most famous, Shakespeare painting in the collection, Hogarth's portrait of David Garrick as Richard III (in room five) dispenses with the audience altogether.  Rather than depicting the actor on stage, the artist chooses to place him within a war-torn landscape as though he's part of history.  Nathaniel Dance-Holland would utilise a similar approach later and although in his version Garrick brandishes his sword aloft, Hogarth has the moment of greater drama as Richard awakens from his nightmares of being visited by the ghosts of his victims, which must have been an electric moment on stage.  Note this is the sort of painting which has its own Wikipedia page.

        Into room six and the thick of the pre-Raphaelites and their successors.  Emma Sandys's Viola, by contrast to the Hogarth, doesn't faithfully depict a moment from Twelfth Night.  The frame has the moment when the Duke Orsino questions Viola about Olivia ("And what's her history?" "A blank, my lord. She never told her love"), with its double meaning as Viola talks about the concealment of feelings when she's really talking about herself, and Sandys chooses to portray this as the character showing her true feminine self rather than in the boy's clothes she would otherwise be wearing during that scene as directed in the text.

        Finally, Arthur Hughes's As You Like It is a painting I'm already very familiar with.  Having seen it during a visit in my school days, it's the version of the characters that flashed through my mind when I first listened to the play, from a vinyl copy of the British Council production released by Argo borrowed from the Central Library, and I now have the postcard on the wall above my desk.  It's a tableau, various scenes from the play set against one another, and although I now prefer the more realistic landscape in John Everett Millais's Rosalind in the Forest displayed nearby (it's an age thing), there's no denying the romance of the Hughes painting and I can see why my young heart leapt.

        Usually in these posts I mention some anecdote about the visit, something else which happened.  Well, the lock on the cubicle in the men's toilet doesn't work so I did have someone pay me an embarrassed visit ("Ooh oh, I'm sorry, um ...") which I mentioned to an attendant, and there was an "out of order" sign when I returned.  Oh, and the air conditioning machines which have appeared in some of the rooms are amazingly loud, though I listened to music all the way round (Preisner as usual) so that was pretty fine.  But this is really just me wanting to continue writing so that the project doesn't end.  When really it's about time for the project to end.  Here.  For now.


        Who Got Fantasy in My Science Fiction? by Charlie Stross

        Not too long ago, someone in the twittersphere asked, "Whatever happened to psi? It used to be all the rage in science fiction."

        The answer, essentially, was that John Campbell died and nobody believes in that crap any more. And anyway, it's fantasy.

        Now here's the thing. If you accept Clarke's Third Law, which boils down in the common wisdom to "Any sufficiently advanced technology is indistinguishable from magic," you kind of have to ask, "Do we believe psi is crap because it really is crap, or do we just not have the technology to detect or manipulate it?"

        Yes, of course, that way lies madness. But with quantum physicists messing around with teleportation, and computer engineers inching toward a technological form of telepathy, are we really that far off from making at least part of the Campbellian weirdness a reality?

        And if that's the case, where did the psi go? It's no more improbable than the ftl drive that's a staple of the space-opera canon. Why is ftl still a thing, but psi is now subsumed under "Magic, Fantasy, Tropes of"?

        Maybe because science fiction is about the hardware, and fantasy is about the wetware? Faster-than-light travel may be presumed to need some form of machine to happen. Psi, by contrast, is an organic phenomenon. Generally it's considered to originate from some form of human or alien (or, since it's fantasy now, magical or elven or similarly fantasy-focused creature) brain.

        When I was a very young writer, a baby for a fact, I sent my first novel--all 987 space-and-a-half 10-point-typed pages of it--to the late, great Lester del Rey. He sent it back with a three-page letter, kindly and reasonably rejecting it, but encouraging me to keep writing, because There Was Hope. The line I remember most clearly from that letter was the one that defined his main reason for passing on the submission: "Fantasy readers seem to be tolerant of science fiction in their fantasy, but science-fiction readers will not stand for fantasy in their science fiction."

        This was when Anne McCaffrey's dragons were still mostly considered science fiction, because alien planet and genetic engineering and John Campbell, and Darkover was in full swing and Andre Norton was mixing hardcore nastytech with her Witches. But the lines were already hardening, and the categories were just beginning to set in cement--not least through the efforts of the Del Reys, who were just getting rolling with the fantasy boom of the Eighties. By the time the Nineties rolled in, McCaffrey was fantasy because dragons, and Bradley and Norton were in the middle somewhere but "Boys Write SF, Girls Write Fantasy," and Bradley had done The Mists of Avalon, so there we all were. With Fantasy now a major category of its own, and Science Fiction sticking to its own shelves in the bookstores.

        It's interesting that even while the categories separated for ever and aye, or at least until the Publishing Apocalypse changed everything, the writers stood up and said, "HEY! We still want to be together!" And the Science Fiction Writers of America became the Science Fiction and Fantasy Writers of America, and fantasy started getting nominated for Nebulas (and wasn't that a tempest in the tiny teapot), though science fiction couldn't (and still can't) be nominated for the World Fantasy Award. But horror can, and is, so there's some cross-fertilization there, too.

        What happened here was that what used to be all one column had become, for marketing purposes, Column A and Column B. Column A: Future, technology (usually high), time travel (if by mechanical means), alien planets, space travel, and so on. Column B: Past or secondary worlds, low tech (or at least lower than the present day, though there's also urban fantasy, which hits most of the other checkboxes), dragons and elves and other mythical beings, time travel or portal travel (if by magical/nontechnological means), magic--and, as a subset thereof, the mind powers known in earlier science fiction as psi, etc., etc.

        So Pern's lost Earth colony was labeled fantasy, between the dragons and the psi. Darkover? Um, yeah. Fantasy. Low tech (albeit voluntary) and psi, despite the central theme of conflict between high and low tech in a spacefaring future. (And yet Dune is still science fiction in spite of the psi and the weirdness. Higher ratio of spacefaring culture to low-tech planet? Post-technological vibe? Male author?)

        What this did to younger writers was lock in the categories and make it difficult to impossible to sell work that crossed the lines. Female writers were pushed to heighten the romance and emphasize the fantasy elements, and many were actively discouraged from venturing into science fiction. The freewheeling nature of the old, smaller, still evolving field had both hugely expanded in numbers and sales reach, and distinctly contracted in the range of what was allowable in worldbuilding and storytelling. Categories solidified, and to some extent ossified.

        I wonder if Steampunk is in some ways a reaction to this. The closer modern technology gets to Clarke's threshold, the more alluring it can be to focus on gears and levers and automata. They're accessible; they don't spin off into quantum bizarrerie. And god forbid, they don't disappear into the Singularity.

        Still, there's psi on the fantasy side of the divide, with the "MAGIC" label slapped over it. The author who gets psi in her space adventure (notwithstanding the Force or the Betazoids) may meet with fastidious flinching and "we can't sell this." The categories are firm, and while there's "interstitial" and "intergenre," those are narrowly defined and equally specific. You can have a science-fiction mystery or a cyberpunk space opera, but a trope from fantasy Column B in your science-fiction Column A? Not so much. Especially if it also mixes up the age groups (YA? Adult? Both? Neither?).

        The ebook boom and the rise of independent publishing--now well on its way to respectability--has been a serious game-changer for authors who can't or won't color correctly inside the lines. Marketing categories still prevail, but there's much more choice and far fewer restrictions. If even a few readers will read it, pretty much anything goes. Even science fiction with fantasy cooties. Or technology that's crossed the line into magic. Or psi powers. With or without the help of technology.

        So maybe psi in science fiction will come back. We're seeing so many different variations on the genre now, and so much exuberance, and a good amount of crossing over and some work that isn't even categorizable (Martha Wells' Raksura, anyone?). Why not a new vogue for mind powers in our science-fictional worlds?


        Subscriptions (feed of everything)


        Updated using Planet on 26 May 2015, 05:48 AM