Francis’s news feed

This combines on one page various news websites and diaries which I like to read.

January 29, 2015

Wind trace interrupt hiccups by Goatchurch

As I suspected, this is where all my grand designs go completely wrong. Recall that I proudly got the wind speed probe to convert each passing of the fan blade into a square wave voltage spike earlier this month.

Well, the reading of this signal was to be done with the following code:

const int windpin = 2;                     // interrupt-capable input pin (pin number assumed)
volatile int windcount = 0;
volatile unsigned long microtimewind = 0;
volatile unsigned long lastmicrotimestamp = 0;

void windinterrupt()
{
    unsigned long microtimestamp = micros();   // micros() returns unsigned long
    microtimewind = microtimestamp - lastmicrotimestamp;
    lastmicrotimestamp = microtimestamp;
}

void setup()
{
    attachInterrupt(windpin, windinterrupt, RISING);
}

In other words, I’d capture the exact number of microseconds when each blade of the wind sensor fan went past the proximity detector.

I saved the values to the SD card and then plotted them as I blew air through the detector and spun the fan. And I got all these horrible spikes where the microsecond time went down to zero:


Of course I thought this was a hardware issue, some noise in the amplifier or something, so I wrote some spike detection code to find out how frequently this was happening.

I also added some buffering on the microtimewind value so that I could capture every single interrupt.

Here is a series of values annotated from that logfile:

microsecond     microsecond     buffer 
timestamp       interrupt gap   placevalue
 7242251,         2311,           600 
 7244505,         2255,           601 
 7246746,         2241,           602 
 7249025,         2279,           603 [buffering starts here]
 7249025,         2262,           603 
 7249025,         2226,           603 
 7249025,         1,              603 [interrupt queuing starts here]
 7249025,         2,              603 
 7249025,         2,              603 
 7249025,         2,              603 
 7249025,         2,              603 
 7249025,         1,              603 
 7249025,         3,              603           
 7249025,         1,              603 
 7255705,         2178,           614 [buffer cleared]
 7257875,         2169,           615 
 7260086,         2212,           616

As you can see, there’s an event (probably to do with the SD card) which holds back the main loop, and we get 3 good readings when the values begin to be buffered. But then the windpin RISING interrupt (which fires with each passing fan blade) starts to get queued.

When whatever is blocking the interrupt function gets unblocked, eight calls are made to windinterrupt() in quick succession (less than 3 microseconds apart).

Well, that was my first impression, until I calculated how many readings should have been in that gap, and my answer is none, because 7249025 + 2279 + 2262 + 2226 = 7255792, which is close to the timestamp (7255705) recorded when the buffer was cleared.

These extra interrupt calls might really be happening, or I might have a bug in my buffering code (I doubt it, as the buffer is 12 items long, so a bug there would tend to produce runs of that length instead).

At any rate, it’s completely screwed up. When I enable my spike detector code, it finds no spikes when the SD card is disabled.

It finds spikes when I write the numbers to the OLED display, which is also on the SPI bus.

I get no spikes when I run an I2C device, like the Gyro/accelerometer.

So the SPI is the problem. And this explains why my barometer readings (which communicate through timed bit-banging across an opto-isolator) are also getting the occasional scrambled numbers.

I need the SPI devices for human input/output and datalogging.

What a bummer.

I’m fresh out of ideas for how to transmit this data for the minute.

It’s a shame, because I was looking forward to doing something interesting with this wind meter today.

Here’s a zoomed-in view of the readings. (Ignore the spikes)


If you look carefully, you can see a 6-sequence cycle for the 6 different blades on the fan. As the blades don’t have identical electrostatic characteristics, some of them appear wider and earlier to the sensor than they should. It’s like they are not quite evenly spaced.

I was going to spend today writing the code to measure their ratios in order to factor out this wobble and get a smooth curve. But now there doesn’t seem much point. On to something else.

Two Winds by Astrobites

Short gamma ray bursts are rapidly flickering flashes of energetic photons (gamma rays) lasting about a second. The total energy released in photons by one of these creatures (10^49 erg) is only a little less than that of a supernova (10^51 erg), and supernovas last weeks! Often short gamma ray bursts are followed by steady afterglows of lower-energy photons (x-rays) lasting up to a few hours. There’s a good working model for what’s causing them: extremely rapid black hole accretion powering a jet. But the standard model’s got a problem: it can’t explain the x-ray afterglows. The authors of today’s paper present a solution to the x-ray problem. Since their work builds on the standard model let’s start there.

The Standard Model

Standard model for gamma ray bursts

Figure 1: the standard model for a short gamma ray burst. 1) Two neutron stars merge (not in the diagram), 2) the super heavy neutron star collapses to a black hole (lower left corner), 3) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge in the lower left corner), 4) a jet is launched (cyan diagonal swath), 5) gamma rays are emitted when the jet slams into the interstellar medium. (Figure from me.)

The engine of the standard model is a black hole accreting a lot of matter very, very fast: about a solar mass in one second. This sounds crazy, but black hole accretion is very efficient at liberating mass energy; it’s a good way to get 10^49 erg out of a stellar mass system over 1 second. This rapid accretion launches a jet of matter at more than 0.999 times the speed of light. Okay, that sounds crazy too, but these extremely relativistic speeds are needed to explain the rapid flickering of the gamma rays. Far away from the central engine, the jet slams into some dense gas, maybe the interstellar medium. This shock produces the gamma rays we see from Earth.

I’ve drawn a spacetime diagram of the model in Fig 1. Recall, objects moving a lot slower than light (like the observer on the far right of the diagram) trace vertical paths in spacetime diagrams; light rays trace diagonal paths.

A double neutron star merger could potentially do all of this. When Nature puts two neutron stars together you might expect a really large neutron star. But neutron stars much larger than 2 solar masses (most isolated ones are about 1.5 solar masses) can’t support themselves: they collapse to form black holes. A black hole – neutron star merger could also do all of this, but these types of mergers are probably much rarer.

The Standard Model + A Supramassive Neutron Star

Gamma ray burst model with an x-ray emitting supramassive neutron star.

Figure 2: the standard model for a short gamma ray burst, including a supramassive neutron star remnant. 1) Two neutron stars merge (not shown in the diagram), 2) the long-lasting supramassive neutron star (wiggly red blob at the lower left) emits x-rays (cyan diagonal squigglies), 3) when its rotation slows down too much, it collapses to a black hole, 4) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge in the middle of the time axis), 5) a jet is launched (cyan diagonal swath), 6) gamma rays are emitted when the jet slams into the interstellar medium. But in this model, the observer sees the x-rays before the gamma rays! (Figure from me.)

Okay, black hole formation leading to rapid accretion leading to a jet leading to gamma rays. So how do we get x-rays out of this thing? Here’s a promising hint: “really massive neutron stars can’t support their own gravity.” “Can’t” as in “not even for a little while”? Well, actually, if they’re spinning super fast, they can survive a little while. In a neutron star merger, this is exactly what happens. The final orbits of the two stars are extremely fast, and this orbital motion carries over into the rapid spin of the merged remnant. (Here’s a movie from Stephan Rosswog.) This wobbling, throbbing, spinning blob is called a supramassive neutron star, because if it weren’t spinning it would collapse to a black hole.

Furthermore, the churning and mixing of the supramassive neutron star will amplify its magnetic field so that it begins to emit light like a pulsar, only much higher energy: x-ray light, actually. Great! So this is where the x-rays come from.

Not so fast; the timing is wrong! The x-rays we’ve just described would actually precede the burst of gamma rays. Here, I’ve added the supramassive neutron star in Fig 2. The observer doesn’t see any x-rays after the gamma rays. And we know this is wrong: we’ve seen lots of x-ray afterglows.

The Two Winds Model

Spacetime diagram of the two-winds model.

Figure 3: the two-winds model. 1) Two neutron stars merge (not shown in the diagram), 2) the supramassive neutron star (wiggly red blob at the lower left) rotates faster on the inside than on the outside causing it to heat, and it emits a slow, dense wind (light orange wobbly diagonal swath), 3) when it begins to rotate uniformly, the magnetic field built up by the earlier differential rotation drives a fast, diffuse wind (green wobbly diagonal swath), 4) when its rotation slows down too much, it collapses to a black hole, 5) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge toward the top of the time axis), 6) a jet is launched (cyan diagonal swath), 7) gamma rays are emitted when the jet slams into the interstellar medium, 8) x-rays are emitted when they diffuse through the slow wind, or when the shock due to the fast wind breaks through the slow wind. (Figure from Rezzolla and Kumar.)

This brings us to the new model. Rezzolla and Kumar have discovered a simple explanation for x-rays after gamma rays by turning their attention away from the central engine. Their model uses all the same bits and pieces we’ve already examined, but it also introduces faraway winds. Their knowledge of winds is based on computer simulations of supramassive neutron stars.

When the supramassive neutron star forms, right after merger, its inside is rotating faster than its outside. This differential rotation heats the supramassive neutron star by friction. While the differential rotation lasts (about 10 seconds), the hot star blows off a shell of slow, dense wind. After about 10 seconds, the supramassive neutron star begins rotating uniformly, and again emits x-rays and a fast, diffuse wind. The fast wind is driven by the same mechanism that emits x-rays, a spinning magnetic field.

When the fast wind slams into the shell of slow wind, it shocks and generates x-rays. But the x-rays are trapped. The slow wind is so dense that it’s opaque to x-rays. Just like photons generated in the Sun, the x-rays scatter and diffuse slowly through the slow wind, slower than the speed of light, slower than the shock wave that produces them. Much of the x-ray light reaches the outer surface, where it can stream freely toward Earth after the jet breaks through and forms the gamma ray burst.

In Fig 3, to the right, Rezzolla and Kumar present their model just like our spacetime diagrams above. Notice that in this model, the gamma rays are formed when the jet shocks into the slow wind, not when it shocks into the interstellar medium.

One key observation could strengthen Rezzolla and Kumar’s two winds model: an x-ray precursor to a short gamma ray burst. The spacetime diagram for their model reveals how x-rays could be visible after and before the main burst. In fact, such precursors have already been seen! But don’t get hasty; there are other ways for neutron star mergers to emit x-rays before they merge (for example shattering their crusts).

Commercial Crew Rivalries: Fun to Watch, Everybody Wins by The Planetary Society

Now that Boeing and SpaceX have won the high-profile privilege of carrying astronauts to the ISS, they must start making public appearances as reluctant equals.

January 28, 2015

My Favourite Film of 2011. by Feeling Listless

Film Here's some background as to how this list has been compiled.

Firstly I sat at the computer and typed up a list of all the films I decided I really quite liked, off the top of my head, as many as I could, as many as I could remember, and these were then transferred into an Access table, and afterwards I went through and added release dates as they appear on the IMDb.

 Admittedly some of those dates won't match the year I saw them but films rarely do anymore, so this doesn't really matter.

After that I went through and took a long hard look at the films listed in the same year and chose one.

Really difficult.

Then I went through and added in all the dates I didn't have anything listed for.

Then I went through the annual lists on this blog to fill in the gaps on the assumption that if I liked it enough to mention it then, I must have liked it quite a lot.

Hence Chalet Girl.

I won my blu-ray copy through an online competition from In The Snow Magazine.  Yes, indeed.

Searching for the title bar image, I find now that there was a production blog and that begins here.

Here's what I said at the end of 2011:

Chalet Girl was this year’s secret classic and I suspect the teenage version of me would have judged it the year’s best (which makes this choice about nostalgia for the person I once was). It’s essentially a British take on the Mary-Kate and Ashley cultural tourism series, but throughout it explodes expectations by making the bitchy blonde rival the best friend, putting the handsome suitor at the epicentre of a discussion on class politics and hiring Bill Bailey to play an emotionally crippled Dad. But the key success is Felicity Jones as the eponymous service worker who uncannily appropriates in her tiny form some of Katharine Hepburn’s verve, timing and just general weirdness, taking full advantage of a script which is drenched in buckets full of cynicism and still able to look just plain cute in a ski coat against the snow. It’s just a shame the typically mishandled advertising campaign and critical reaction put everyone else off.

I can't really improve on that other than to also notice just how many Doctor Who related actors appear.  Like Broadchurch, it would probably be quicker to list who hasn't been in Doctor Who.

Frozen Rock. by Feeling Listless

Film The instrumental version of Fixer Upper from the ULTIMATE EDITION soundtrack to the film Frozen:

It's Fraggle Rock, isn't it?

Community Ownership? by Albert Wenger

I am very excited about what we are doing over at with the Topic of the Week. Last week we discussed undoing incumbent marketplaces and this week it is about community ownership of applications.

We wanted it to be an extension of the discussions that we have internally, and this new approach feels like we are getting closer. Instead of a loose collection of links the topic approach gives the conversation more shape and direction.

So if you are interested in the question of who should own the networks that are so important to our lives and how they should be governed, head on over and join the discussion.

Shedding Light on Galaxy Formation by Astrobites

Authors: Joakim Rosdahl, Joop Schaye, Romain Teyssier, Oscar Agertz

First Author’s Institution: Leiden Observatory, Leiden University, Leiden, The Netherlands

Paper Status: Submitted to MNRAS

Computational simulations have proven invaluable in understanding the formation and evolution of galaxies. When the first galaxies were made in simulations, they formed… too well. Gas cooled too much and too fast, and these galaxies formed way too many stars. These first simulations, however, missed a whole host of physics that today fall under the umbrella of “feedback” processes. Feedback encompasses a wide range of really interesting astrophysics, including radiation from stars heating and ionizing surrounding gas, thermal and kinetic energy injection from supernova explosions,  heating from active galactic nuclei (AGN), and the impact of AGN jets. Among other things, these processes can drive galactic winds, blowing gas out of galaxies, and slowing down star formation.

Including every type of feedback in a simulation is a great way to produce realistic galaxies, but is unfortunately computationally expensive and impossible to do perfectly. In fact, much of the relevant physics occurs on scales far smaller than the simulation resolution, and must be addressed with what is called “sub-grid” models. Today, the game of producing realistic galaxies in simulations boils down to figuring out the right physics to include, while minimizing computational costs. Much progress has been made in this field, with one giant exception: radiation. Properly accounting for radiation is expensive and complex; the nearly universal solution is to make assumptions about how photons propagate through gas, so it doesn’t have to be computed directly. The authors of this paper present the first galaxy-scale simulations of “radiation hydrodynamics”, or hydrodynamic simulations that directly compute the radiative transfer of photons, and their feedback onto the galaxy.

Feedback, Feedback, Feedback

The authors produce galaxy simulations using RAMSES-RT, an adaptive mesh refinement (AMR) code that includes a nearly first-principles treatment of radiation, based upon the RAMSES code. Their treatment of radiation breaks photons into five energy bins: infrared, optical, and 3 ultraviolet bins separated by hydrogen and helium ionization energies. These act on the gas through three primary physical processes: the ionization and heating of the gas through interactions with hydrogen and helium; momentum transfer between the photons and the gas (radiation pressure); and pressure through interactions with dust (including the effects of light scattering off of dust). In addition, they include prescriptions for star formation and supernova feedback, radiative heating and cooling, and chemistry to accurately track the abundances of hydrogen and helium and their ionization states. The photons are produced every timestep in the simulation from “star particles” (representations of groups of stars in the simulated galaxy); the number of photons and their energies are determined by the given star particle’s mass and size.


Fig.1: The radiation flux from the G9 galaxy for all five photon energy bins. Shown is the face-on (top) and edge-on (bottom) views of the galaxy. (Source: Fig. 1 of Rosdahl et. al. 2015)



Fig. 2: Table of each type of simulation run, showing which feedback types were included in each. These are (in order) supernova feedback, heating from radiation, momentum transfer between photons and gas (radiation pressure), and radiation pressure on dust (Source: Table 3 of Rosdahl et. al. 2015)

The authors include all of this physics in simulations of 3 disk galaxies (labelled G8, G9, G10), each containing roughly 10^8, 10^9, and 10^10 solar masses of gas + stars, embedded in dark matter haloes of about 10^10, 10^11, and 10^12 solar masses. The heaviest of these three is comparable to the Milky Way. Fig. 1 above shows face-on and edge-on views of radiation flux from the G9 galaxy in all 5 photon energy bins. In each of their simulations, they evolve each galaxy for 500 Myr, examining how turning on / off various feedback processes (namely supernova, radiation heating, and radiation pressure) affects the evolution of each galaxy. Fig. 2 gives the combination of physics in each simulation type and their labels.

Star Formation and Galactic Winds

Although the authors present a thorough investigation into many of the details of their radiation feedback, this astrobite will focus on only two effects: how the radiation affects the formation of stars, and its effect on driving galactic winds. Fig. 3 presents the total star formation (in stellar masses) and star formation rate for the G9 galaxy under 6 different simulations. The labels in Fig. 3 are given in Fig. 2. The dashed lines give the mass outflow rate from the galactic winds as measured outside the galaxy. On one extreme, the simulation with no feedback converts gas into stars too efficiently, and drives no galactic winds. On the other, the full feedback simulation (dark red) produces the fewest stars but, interestingly, has weaker galactic winds than some of the other simulations. The three thick lines in the top of Fig. 3 give the supernova + radiation feedback simulations. Compared to the supernova-only simulation (blue), radiative heating provides the dominant change, while including radiation pressure and dust pressure makes only small changes to the total star formation.


Fig. 3: For galaxy G9, shown is the total mass of stars (top), the star formation rate (bottom), and the galactic wind outflow rate (bottom, dashed) for each of the simulations listed in Fig. 2. (Source: Fig. 4 of Rosdahl et. al. 2015)



Fig. 4: Galactic winds from the G9 galaxy with supernova feedback only (left) and with supernova and radiation feedback (right). The images show the surface (column) density of hydrogen. (Source: Fig. 5 of Rosdahl et. al. 2015)

Fig. 4 shows the outflows of the G9 galaxy at the end of the simulation run, with SN feedback only on the left, and full feedback on the right. Although the two are morphologically quite different, the authors show that the difference in total mass loss from galactic winds between the two simulations is minimal (see Fig. 3). In fact, they show that the full radiation feedback model drives slightly weaker winds, a byproduct of slowing down star formation in the galaxy.

The Future of Galaxy Evolution

The authors have shown that radiative feedback does play an important role in studying galaxy formation and evolution. In this work, they sought to characterize the effects of supernova and radiative feedback vs. supernova feedback alone.  In future work, the study of radiation feedback on various scales, from small slices of the galactic disk to larger galaxies, and the inclusion of AGN feedback in these simulations, will be important in piecing together a complete understanding of galaxy formation.

Ceres: Just a little bit closer (and officially better than Hubble) by The Planetary Society

Last week's Dawn images of Ceres were just slightly less detailed than Hubble's best. This week's are just slightly better.

A second ringed centaur? Centaurs with rings could be common by The Planetary Society

Chiron, which is both a centaur and a comet, may also have rings.

January 27, 2015

On the road again! by Charlie Stross

I'm off to New York on Thursday, weather permitting, and won't be back until late in February. I'm in the US for business, and while I'm there I'll also be appearing at Boskone 52 in Boston from February 13-15; you can find me on the program schedule here.

I'll also be hanging out and drinking beer from 6pm on next Monday, the 2nd, in Pine Box Rock Shop in Brooklyn. It's really close to the Morgan Ave L stop (opposite side of the block), has good beer and spirits, and can feed vegans (not me: my wife). You can find it on Google Maps as Pine Box Rock Shop, 12 Grattan Street, Brooklyn, NY 11206, United States. If you're reading this, you're welcome to come along. (I'm told there's a facebook page for the event here. NB: I don't do Facebook.)

While I'm away I'm handing the blog over to an ensemble of all-star SF/F writers. We'll have Harry Connolly, Laura Anne Gilman, Elizabeth Bear, and the collaborating duo of Sherwood Smith and Rachel Manija Brown.

Sherwood Smith studied in Austria, finally earning a (useless) master's in history—she's been a governess, a bartender, and wore various hats in the film industry before turning to teaching for 20 years. She began her publishing career in 1986. To date she's published over forty books and been nominated for several awards, including the Nebula, the Mythopoeic Fantasy Award, and an Anne Lindbergh Honor Book.

Harry Connolly's debut novel, Child of Fire, was named to Publishers Weekly's Best 100 Books of 2009. He turned to Kickstarter to fund the publication of his epic fantasy trilogy The Great Way and became the ninth most-funded project in the Fiction category. He lives in Seattle with his beloved wife, beloved son, and beloved library system.

Recanting a life of regular paychecks, Laura Anne Gilman left her full-time gig as an editor in 2003. Since then, she has written the popular Cosa Nostradamus urban fantasy series (most recently Promises to Keep), and the Nebula award-nominated Vineart War trilogy. Her next fantasy novel, Silver on the Road, will be published by Saga Press/Simon & Schuster in 2015. She still keeps a freelance hand in on the editing gig, though, because red ink is addictive.

Elizabeth Bear was born on the same day as Frodo and Bilbo Baggins, but in a different year. When coupled with a childhood tendency to read the dictionary for fun, this led her inevitably to penury, intransigence, and the writing of speculative fiction. She is the Hugo, Sturgeon, Locus, and Campbell Award winning author of 27 novels (the most recent is Karen Memory, a Weird West adventure from Tor) and over a hundred short stories. Her dog lives in Massachusetts; her partner, writer Scott Lynch, lives in Wisconsin. She spends a lot of time on planes.

And between them, they're going to keep the blog busy for the next month!

Oh Fant4stic. by Feeling Listless

Film Fantastic Four trailer, oh sorry Fant4stic trailer (eye roll), in "not as shit as we were expecting" shocker. It's not Chronicle 2 apparently, with little evidence of a "found footage" aesthetic, but about the only thing which would indicate this is a FF film is The Thing. There's a clear attempt to keep this as separate as possible in tone from the MCU, but among the elements the original two rubbish attempts got right were the aesthetic (mostly) and the humour (almost).  But as we always remind ourselves, The Phantom Menace trailer made that look like the best film ever, so it's entirely possible this could be false advertising.  Still more interested in the Ant-Man film though.

Winston Churchill’s funeral in HD on BBC Parliament. by Feeling Listless

TV This Friday, BBC Parliament will be broadcasting the state funeral of Churchill from the original archive film masters. At the About the BBC blog, James Rowland, Senior Media Manager, BBC Archives describes the restoration process:

"While planning the project the BBC Archives team discovered that a section from the funeral footage, reportedly featuring two buglers inside St Paul’s Cathedral playing The Last Post followed by Reveille, was missing from a transfer that had been made from the print of the original film many years ago. When the team checked the original film they found, to their relief, the missing section of approximately 3 minutes and 35 seconds was safely preserved on the original film. Up until the Eighties it was relatively common, although not permitted, for users of the Archive to physically cut sections out of the film to use in their programmes with urgent deadlines such as current affairs and we think that’s what may have occurred with the print of the funeral which was later transferred to tape and added to the archive holdings."
BBC Parliament isn't in HD on Freeview yet, but fortunately this will be on the iPlayer afterwards which should be.

Undoing Incumbent Marketplaces: Lowering the Take Rate by Albert Wenger

A big part of our investing at Union Square Ventures is to find businesses that have network effects. Marketplaces fit with that because participating in a marketplace tends to become more valuable as the marketplace grows (that’s assuming the marketplace does a good job spreading liquidity among participants). The reason to be interested in network effects from an investor perspective is that they offer a source of defensible competitive advantage. That in return implies that a company that operates a network can obtain some level of economics from the network. In the case of marketplaces that is measured most directly by the take rate. The take rate is the company’s revenues expressed as a percentage of the total volume of activity in the marketplace.

I have written about take rate before here on Continuations. I am revisiting this because last week’s topic of the week was “What will undo today’s incumbent marketplaces?” I think one answer is marketplaces with a lower take rate. I don’t see take rates of 20% or more as sustainable in the long run. Why? Because they (a) impose too heavy a tax on marketplace activity and (b) produce too much profitability for the marketplace operator. The first is bad from the perspective of efficiency, as it will crowd out some transactions that would benefit from taking place in the market. The second will provide the incentive for competing marketplaces to be created.

The process of unseating an incumbent with a lower take rate is, however, long and arduous. The network effects of the existing marketplace make it tough for a challenger to build momentum. And the challenger may need other forms of differentiation in addition to a lower take rate. Or better yet it will have to grow initially in an area that is not at all (or only very poorly) served by the incumbent. The growth of Etsy is a case in point. eBay had tried to grow their take rate over time — largely by raising listing fees — and increasingly focused on power sellers of commodity items. That left open the opportunity to build a new marketplace with a lower take rate and focused on lower volume arts and crafts sellers. Over time this has happened to eBay in a number of different markets and has put pressure on their take rate.

More recently there are startups including OfferUp and VarageSale that are going after eBay and Craigslist with a natively mobile strategy. For now they are both free to list and buy, so their take rate — like that of Craigslist in these categories — is zero. That is of course the lowest possible take rate, but then the operation of the marketplace needs to be subsidized. Craigslist does this by charging for other parts of their business (real estate and job classifieds in select cities only). It will be interesting to see how this segment plays itself out over time. A first step to establishing a take rate for the newcomers will likely be to close the payments loop. From there though it is a lot less clear where to go and how much to charge. I suspect that the winning strategy will be to keep the take rate as small as possible while still building a sustainable business.

Evryscope, Greek for “wide-seeing” by Astrobites

Authors: Nicholas M. Law et al.

First Author’s Institution: University of North Carolina at Chapel Hill

How fantastic would it be to image the entire sky, every few minutes, every night, for a series of years? The science cases for such surveys —in today’s paper they are called All-Sky Gigapixel Scale Surveys— are numerous, and span a huge range of astronomical topics. Just to begin with, such surveys could: detect transiting giant planets, sample Gamma Ray Bursts and nearby Supernovae, and a wealth of other rare and/or unexpected transient events that are further described in the paper.

Evryscope is a telescope that sets out to take such a minute-by-minute movie of the sky accessible to it. It is designed as an array of extremely wide-angle telescopes, contrasting with the traditional meaning of the word “tele-scope” (Greek for “far-seeing”) through its emphasis on extremely wide angles (“Evryscope” is Greek for “wide-seeing”). The array is currently being constructed by the authors at the University of North Carolina at Chapel Hill, and is scheduled to be deployed at the Cerro Tololo Inter-American Observatory (CTIO) in Chile later this year.

But wait, aren’t there large sky surveys out there that are already patrolling the sky a few times a week? Yes, there are! But a bit differently. There is for example the tremendously successful Sloan Digital Sky Survey (SDSS— see Figure 1, and read more about SDSS-related Astrobites here, here, here), which has paved the way for numerous other surveys such as Pan-STARRS, and the upcoming Large Synoptic Survey Telescope (LSST). These surveys are all designed around a similar concept: they utilize a single large-aperture telescope that repeatedly observes few-degree-wide fields to achieve deep imaging. Then the observations are tiled together to cover large parts of the sky several times a week.


Figure 1: The Sloan Digital Sky Survey Telescope, a 2.5m telescope that surveys large areas of the available sky a few times a week. The Evryscope-survey concept is a bit different, valuing the continuous coverage of almost the whole available sky over being able to see faint far-away objects. Image from the SDSS homepage.

The authors of today’s paper note that surveys like the SDSS are largely optimized for finding day-or-longer-type events such as supernovae —and are extremely good at that— but are not sensitive to the very diverse class of even shorter-timescale transient events (remembering the list of example science cases above). Up until now, such short-timescale events have generally been studied with individual small telescopes staring at single, limited fields of view. Expanding on this idea, the authors propose the Evryscope as an array of small telescopes arranged so that together they can survey the whole available sky minute-by-minute. In contrast to SDSS-like surveys, an Evryscope-like survey will not be able to detect targets nearly as faint, but instead focuses on the continuous monitoring of the brightest objects it can see.


Figure 2: The currently under-construction Evryscope, showing the 1.8m diameter custom-molded dome. The dome houses 27 individual 61mm aperture telescopes, each of which have their own CCD detector. Figure 1 from the paper.

Evryscope: A further description

Evryscope is designed as an array of 27 61mm optical telescopes, arranged in a custom-molded fiberglass dome, which is mounted on an off-the-shelf German Equatorial mount (see Figure 2). Each telescope has its own 29 MPix CCD detector, adding up to a total detector size of 0.78 GPix! The authors refer to Evryscope’s observing strategy as a “ratcheting survey”, which goes like this: the dome tracks the instantaneous field of view (see Figure 3, left) by rotating ever so slowly to compensate for Earth’s rotation, taking 2-minute exposures back-to-back for two hours, and then resetting and repeating (see Figure 3, right). This ratcheting approach enables Evryscope to image essentially every part of the visible sky for at least 2 hours every night!
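A quick sanity check of the quoted detector size, using only the two figures given above (the camera count and per-camera pixel count come from the paper; the arithmetic is mine):

```python
# Evryscope detector size: 27 cameras, each with its own 29-megapixel CCD.
n_cameras = 27
pixels_per_camera = 29e6  # 29 MPix

total_pixels = n_cameras * pixels_per_camera
print(f"Total detector size: {total_pixels / 1e9:.2f} GPix")  # ~0.78 GPix, matching the paper
```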


Figure 3: Evryscope sky coverage (blue), for a mid-latitude Northern-hemisphere site (30°N), showing the SDSS DR7 photometric survey (red) for scale. Left: Instantaneous Evryscope coverage (8660 sq. deg.), including the individual camera fields-of-view (skewed boxes). Right: The Evryscope sky coverage over one 10-hour night. The intensity of the blue color corresponds to the length of continuous coverage (between 2 and 10 hours, in steps of 2 hours) provided by the ratcheting survey, covering a total of 18400 sq.deg. every night. Figures 3 (left) and 5 (right) from the paper.
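To put the caption’s coverage numbers in context, here is a small Python sketch; the 8,660 and 18,400 sq. deg. figures come from the caption, while the 41,253 sq. deg. total sky area is a standard figure I am adding:

```python
# Fractions of the celestial sphere covered, using the caption's numbers.
FULL_SKY_SQDEG = 41_253  # total solid angle of the sky, in square degrees

instantaneous = 8_660 / FULL_SKY_SQDEG   # single-exposure coverage, ~21%
nightly = 18_400 / FULL_SKY_SQDEG        # coverage over one 10-hour night, ~45%
print(f"instantaneous: {instantaneous:.0%}, nightly: {nightly:.0%}")
```

So each two-minute exposure sees about a fifth of the entire sky, and roughly 45% of it gets covered every night.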

With its Gigapixel-scale detector, Evryscope will gather large amounts of data, amounting to about 100 TB of compressed FITS images per year! The data will be stored and analyzed on site. The pipeline will be optimized to provide real-time detection of interesting transient events, with rapid retrieval of the associated compressed images, allowing for rapid follow-up with other more powerful telescopes. Real-time analysis of the sheer amount of data that Gigapixel-scale systems like Evryscope create would have been largely unfeasible just a few years ago. The rise of consumer digital imaging, ever-increasing computing power, and decreasing storage costs have, however, made the overall cost manageable (much less than one million dollars!) with current technology.
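The ~100 TB/year figure is easy to sanity-check. The sketch below is a back-of-envelope estimate, not the paper’s calculation: the 16-bit pixel depth, 10-hour observing nights, and 2:1 lossless compression ratio are my assumptions, while the detector size and 2-minute cadence come from the paper.

```python
# Rough estimate of Evryscope's yearly compressed data volume.
pixels = 0.78e9                      # total detector size (from the paper)
bytes_per_pixel = 2                  # 16-bit CCD readout (assumed)
exposures_per_night = 10 * 60 // 2   # 2-minute exposures over an assumed 10-hour night
nights_per_year = 365
compression = 2.0                    # assumed lossless FITS compression ratio

raw_per_night = pixels * bytes_per_pixel * exposures_per_night  # bytes per night
yearly_tb = raw_per_night * nights_per_year / compression / 1e12
print(f"~{yearly_tb:.0f} TB/year compressed")  # same order of magnitude as the quoted 100 TB
```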


The Evryscope array is scheduled to see first light later this year at CTIO in Chile, where it will start to produce a powerful minute-by-minute data-set on transient events happening in its field of view, which until now have not been feasible to capture. But it won’t see the whole sky, just the sky it can see from its location on Earth. So then, why stop there, why not reuse the design and expand? Indeed, this is what the authors are already thinking —see the whitepaper about the “Antarctic-Evryscope” on the group’s website. And who knows, maybe soon after we will have an Evryscope at *evry* major observatory in the world working together to record a continuous movie of the whole sky?

At last! A slew of OSIRIS images shows fascinating landscapes on Rosetta's comet by The Planetary Society

The first results of the Rosetta mission are out in Science magazine. The publication of these papers means that the OSIRIS camera team has finally released a large quantity of closeup images of comet Churyumov-Gerasimenko, taken in August and September of last year. I explain most of them, with help from my notes from December's American Geophysical Union meeting.

It's Official: LightSail Test Flight Scheduled for May 2015 by The Planetary Society

This May, the first of The Planetary Society's two member-funded LightSail spacecraft is slated to hitch a ride to space for a test flight aboard an Atlas V rocket.

January 26, 2015

Oh, please stop ... by Simon Wardley

A simple test.

1) Can you describe your user needs for all your lines of business, processes or systems?

2) Can you describe the components that make up those needs and the connections between the components?

3) Do you have some form of mechanism for visualising and communicating this (e.g. a map)?

4) Do you have a means of comparing maps together to remove duplication in the organisation, to challenge bias that exists within groups, to help find unmet needs, to combat inertia and to compare with competitors?

5) Do you treat your components individually and understand that one size fits all methods don't work? Do you apply multiple methods? 

6) Do you understand that the components are evolving due to competition and what works today will not be the same tomorrow? Do you understand any map is dynamic and an imperfect representation of reality? Do you understand the very basics of change - componentisation, creative destruction, punctuated equilibrium etc.?

7) Do you attempt to alter your map (e.g. use of constraints, opening a field, blocking competition, etc) and observe the impact of what happens?

8) Do you understand what situational awareness is and why it matters?

UNLESS the answer to all the above is an emphatic YES, then please don't waste your time by sending me emails wanting to discuss disruption, ecosystem, platforms, competitive advantage and STRATEGY. These mean different things to you and me.

Instead, if you need help writing your strategy then I've put in considerable effort and prepared you an individual bespoke strategy already in advance.  Just for you - just click this link - and I'll send you on your merry way. Bon Voyage.

PS. If you don't like the strategy, click the link again and it'll magically create you another at no extra charge.

PPS. I have enough to deal with particularly on my latest research project - the clash of nations. There is no shortcut to mapping, you have to learn it and practice. 

"Le professeur Moriarty chez Sherlock Holmes." by Feeling Listless

Film William Gillette: Five ways he transformed how Sherlock Holmes looks and talks:

"A 1916 silent movie featuring Sherlock Holmes - long presumed lost - is due to have its premiere in Paris. It stars a man who changed the way we see Conan Doyle's famous sleuth forever."
This is the silent which was discovered towards the end of last year. The short clip behind the link is mesmerising and also feels like we're watching the real Holmes going about his business.

Why is mapping more popular in Government than Commercial companies? by Simon Wardley

I was asked this question and thought I'd better explain some important aspects of mapping

First, mapping is about situational awareness and therefore it helps if people have had some military background or experience. Second, I tend to find mapping is more popular in smaller organisations (founder run) and Governments which both need to think about the longer term and strategic play more carefully.

Mapping does take a bit of work and thinking about. Yes, it can remove duplication and fundamentally change the way you play the game, but it's not really suited to large commercial organisations. Why?

Well, if you consider that it can take a couple of years to get really good at mapping, collate all the maps needed to enhance your situational awareness and learn to play the game then despite its effectiveness, the timeframe is too long for many large commercial organisations. Execs don't tend to think long term (i.e. > 3 years) as they only expect to be in their position for a short time. I've met many that actively discourage longer term thinking. They often don't give two hoots about survivability of the organisation beyond their term and are instead focused almost exclusively on short term share price fluctuations. This is what they are motivated by and unlike a founder run organisation, there is no legacy or loyalty here. 

What I've often found is that execs are after a quick fix to boost share price which is why copying others / backward causality / cost cutting are so attractive. The last thing they generally want to do is rock the boat, think about the long term or make significant changes which others aren't doing. The market actually discourages people from doing this.

Mapping on the other hand is more about long term sustainability than short term quick fixes. It lacks the simplicity and speed of a SWOT requiring a more dedicated effort to understand user needs, the environment, evolution and how to improve situational awareness. It's therefore not suitable for all organisations.

Grad students: apply now for ComSciCon 2015! by Astrobites

ComSciCon 2015

ComSciCon 2015 will be the third in the annual series of Communicating Science workshops for graduate students

Applications are now open for the Communicating Science 2015 workshop, to be held in Cambridge, MA on June 18-20th, 2015!

Graduate students at US institutions in astronomy, and all fields of science and engineering, are encouraged to apply. The application will close on March 1st.

It’s been more than two years since we announced the first ComSciCon workshop here on Astrobites. Since then, we’ve received almost 2000 applications from graduate students across the country, and we’ve welcomed about 150 of them to three national and local workshops held in Cambridge, MA. You can read about last year’s workshop to get a sense for the activities and participants at ComSciCon events.

While acceptance to the workshop is competitive, attendance at the workshop is free of charge and travel support will be provided to accepted applicants.

Participants will build the communication skills that scientists and other technical professionals need to express complex ideas to their peers, experts in other fields, and the general public. There will be panel discussions on the following topics:

  • Communicating with Non-Scientific Audiences
  • Science Communication in Popular Culture
  • Communicating as a Science Advocate
  • Multimedia Communication for Scientists
  • Addressing Diversity through Communication

In addition to these discussions, ample time is allotted for interacting with the experts and with attendees from throughout the country to discuss science communication and develop science outreach collaborations. Workshop participants will produce an original piece of science writing and receive feedback from workshop attendees and professional science communicators, including journalists, authors, public policy advocates, educators, and more.

ComSciCon attendees have founded new science communication organizations in collaboration with other students at the event, published more than 25 articles written at the conference in popular publications with national impact, and formed lasting networks with our student alumni and invited experts. Visit the ComSciCon website to learn more about our past workshop programs and participants.

ComSciCon 2014

Group photo at the 2014 ComSciCon workshop

If you can’t make it to the national workshop in June, check to see whether one of our upcoming regional workshops would be a good fit for you.

This workshop is sponsored by Harvard University, the Massachusetts Institute of Technology, University of Colorado Boulder, the American Astronomical Society, the American Association for the Advancement of Science, the American Chemical Society, and Microsoft Research.

January 25, 2015

Ariane Sherine interviews Charlie Brooker. by Feeling Listless

TV Sherine and Brooker are old friends and colleagues so the conversation enters some surprisingly personal areas:

"We met when we were both working in television. I was a jobbing episode writer on £12,000 a year; when Charlie was with me, he was generous to the extent that he wouldn’t even let me pay for bus tickets, and bought me self-help books when I was struggling with anxiety and depression. On meeting him in 2005, I had just escaped an abusive relationship which culminated in violence during pregnancy. Charlie was friendly and kind and safe, and though he alone couldn’t restore my faith in people (that would take years), he made me realise that though there were people in the world who would try to hurt me, there were also those who would try and help."

Happy Burns Night! (from one of them). by Feeling Listless

The Pre-Gap. Etc. by Feeling Listless

Music The Guardian's Jude Rogers lifts the groove on hidden tracks:

"CDs allowed the inclusion of a song in the “pregap” – the space before track one, only accessible by pressing rewind (interestingly, this still cannot be read by computers while uploading tracks today). Ash put their debut single, Jack Names the Planets, in this space on their album 1977; Super Furry Animals put pregap tracks on 1997’s Guerrilla and the 1998 compilation Out Spaced; and Unkle, the James Lavelle/DJ Shadow outfit, inserted a track teeming with samples of their influences on 1998’s influential Psyence Fiction – it would have been difficult to get clearance to use all these pieces."
Wikipedia has a list of albums with tracks hidden in the pregap.

"The truth is much more mundane" by Feeling Listless

About Suw Charman-Anderson debunks five social media myths:

"The idea of the “digital native” is a pervasive one, telling us that young people somehow innately understand technology whilst older people are social media dullards incapable of truly understanding how it works. This idea is nonsense. The truth is much more mundane: Technological capability, interest and access varies as much amongst young people as it does amongst older people. And whilst young tech users may relate to their technology differently, that doesn’t mean that they have developed a deeper or more comprehensive understanding than older users."

Celebrated. by Feeling Listless

TV Even with the odd sporadic bit of Shakespeare and the odd piece on BBC Arts and the iPlayer, theatre continues to be a no show on television even in adaptation. Back in 2007 (crumbs) I wrote this opinion piece arguing for more of it to coincide with a Pinter production on More4. This Pinter production:


Monday, February 26, 2007 by Stuart Ian Burns

The speech given by the announcer heralding More4's presentation of Harold Pinter’s play Celebration couldn’t have been more disheartening. Beforehand, in one of their conversational station idents, star Michael Gambon had described how he loved Pinter’s work because of the deep subtext. That he liked the fact the characters never said what they really meant. That there were “two miles of other thoughts” underneath.

Then the man from More4 delivered his warning: “And you can enjoy those layers of thought right now on More4 as Michael Gambon amongst many others delivers a very thought-provoking and at times challenging script. Expect strong language from the start and throughout for …” In other words, there’s some swearing and stuff. Has commercial broadcasting now reached the point that when something more substantial than the norm is dropped into primetime, it has to effectively warn the viewer that they’re going to need to use their intelligence?

The play took place in a single setting, a restaurant, and for much of its duration cut back and forth between the inhabitants of two tables. In the first Lambert (Michael Gambon) and Julie (Penelope Wilton) were celebrating their wedding anniversary with his brother Matt (James Bolam) and her sister Prue (Julia McKenzie) who were also married to each other. At the second table, Russell (Colin Firth) is discussing his future with his wife Suki (Janie Dee).

Periodically they were interrupted by the staff of the restaurant, the owner Richard (James Fox), his assistant (Sophie Okonedo) and a waiter (Stephen Rea) who appeared to be afflicted with a false memory syndrome which meant that depending upon their topic of conversation, be it T.S. Eliot or the Hollywood studio system, he would describe to the patrons the various random famous people his grandfather was apparently acquainted with.

The subtext highlighted by Gambon was evident throughout, as for all their airs and graces, the three couples lacked some fundamentals of civilization and used their words sharply as weapons to cut each other to shreds. Though Suki was obviously trying to boost her husband’s confidence, she still found time to tell him he lacked a clear personality. Meanwhile, just as Julie remembered how she and Lambert met on the top deck of a bus, he undercut her reminiscence by describing a walk he took with a very pretty girl by a stream. Unctuous characters all – the old men were apparently gangsters and Russell was a banker who at one point describes himself as a psychopath.

But even though the dialogue was often very funny, it needed a genial cast to keep the audience interested – and that’s exactly what it got here. Presenting the work with such a stellar line-up gave it a real sense of occasion with Gambon and particularly Firth clearly enjoying the chance to speak lines a cut above the fare they will have become used to recently in film. Rea was touchingly humble, but the best turn was probably from Janie Dee who conveyed a life spent fighting to become something more than a secretary with just a few looks.

The entire production was a perfect demonstration of how well theatre plays can work on television. Filmed on location in the ravishing atmosphere of The Aster Bar & Grill in London, it still retained a certain theatricality in its staging but, possibly because this production originated for television, displayed a rare intimacy. It was somewhat like BBC3's strange early reality TV experiment Diners, in which the viewer eavesdropped on the chat of the likes of Roland Rivron and Paul Ross while they were having their dinner; here the viewer could almost be sitting at each table possibly getting pissed on the wine.

One of the reasons sometimes given by commercial broadcasters when discussing the appearance of theatre on television is that it’s difficult to know where to put the ad-breaks. Shakespeare is fine because there are acts – but what of the likes of Pinter who generally, as with Celebration, sets everything in one space in a single scene? Potentially the biggest pleasure of this piece was that More4 decided to broadcast it without any interruption, allowing Pinter’s writing to breathe. More please, 4.

Celebration is not available to watch on 4od.  For some reason.

January 24, 2015

Carrie Gracie on her first year in China. by Feeling Listless

Journalism China editor on a journey to reveal:

"For a self-confessed 'China obsessive', the role of the BBC's first China editor might have seemed like a dream job for Carrie Gracie. But the former News Channel presenter and Beijing bureau chief didn't exactly rush to apply when the post was first advertised last spring. In fact it took her until autumn 2013 to decide that she couldn't let this opportunity go by."

Default Option by Charlie Stross

It is Saturday January 24th, 2015. Greece is going to the polls tomorrow, in an election triggered by the main centre-right coalition's inability to form a consensus on who the president should be. (The Greek President is elected by members of the Parliament rather than by the public or an electoral college.) It takes place against a background of traumatic externally-imposed austerity that is familiar, in watered-down form, to anyone living in the UK outside of London and the south-east, and to many elsewhere in Europe. And it is looking as if Syriza, the Coalition of the Radical Left, is on course to win an outright majority and form a new non-coalition government.

This is not an insignificant regional event. Events in Greece set a precedent for the next election in Spain, where support for Podemos ("We can") is growing rapidly. It may also provide a precedent for the UK, which is due to undergo a general election this May, and where polling suggests that the once-dominant share of the vote held by the Labour and Conservative Parties (around 97% of votes cast, in 1950) has declined to around 60%, and where hitherto marginal parties (UKIP on the right, the Greens on the Left) are rising towards, or passing, the 10% milestone.

Syriza is a left-wing party, unapologetically opposed to the policies of austerity and IMF imposition of deficit-reduction on the Greek public. They don't want to leave the Euro (to do so would cause, at a minimum, a banking crisis and a worsening of recession), but the widespread pain of austerity has reached the point where the downside of leaving the Euro may be seen as less unpleasant than continuing along the current path. (Nor is austerity without its critics; it's deflationary, damaging to growth, and there is some evidence that it is being chosen as the course out of the 2007/08 crisis by the rich for ideological reasons rather than efficacy—it doesn't harm continued accumulation of capital, but it places a disproportionate burden on the poor.)

Predictably the big political guns throughout the EU have been wheeled out against Syriza, to frighten them into going along with the post-2010 arrangement. But it's looking increasingly likely that the Greek public are about to say, not merely "no," but "hell, no!"

So what happens next? Monday's papers are going to be an interesting read ... as for me, I'm speculating idly if, now that Lenin's not-so-excellent experiment has been dead and buried for a generation and the crisis of capitalism has given us a salutary lesson in the consequences of unbridled greed, we aren't now drifting back towards the realization that it's time to try Socialism 2.0.

Maps and the Target Operating Model by Simon Wardley

I was asked a question about my post on getting stuff done and then noticed a relevant but unrelated tweet by Tom Loosemore

.... so I thought I'd better answer. Does the map represent your target operating model? No, but not for the reason you might think.

My original post discussed more the process by which you get something started, a key part of which is creating the map to determine, communicate and then challenge what is to be done. Once this has been done, the map gives you an initial target (see Figure 1).

Figure 1 - Initial Target

From the map you have a set of user needs, numerous components to meet those needs (a chain of needs or value chain), a necessity to use multiple methods (due to changing characteristics), a division into small teams, use of components from others, delivery of components to others and a set of strategic plays ... read the original post for details.

Now, a few things are worth noting.

1) The map is imperfect (all maps are). So other components and needs are going to emerge en route.

2) The map is not static. All components evolve due to competition (supply and demand), and hence their characteristics, and how you might treat them, will change.

3) Others will develop systems which impact your map i.e. provide opportunities for either re-use of their components or supply of your components to them.

The map is a guide not a target operating model because the map is not static. It will change and you will have to adapt as it changes. Maps of a competitive environment are fluid - evolution doesn't stop because you're building a big project. This actually can be extremely useful because you can use timing to assist you. In very long projects, you can de-prioritise selected components for as long as possible to give them the maximum time to evolve to more industrialised (e.g. commodity) forms.

I'm not a fan of the target operating model concept because it implies that a static, steady state can be achieved. That's a rare exception in my view and whenever I have been able to reach it, I've generally found that life has moved on and I need to change again.

Almost run out of things to solder together now by Goatchurch

After spending a few days with all my bits and break-out boards in a bowl and stirring them around aimlessly, I got all the major SPI components lined up on a breadboard, like so:

That’s an SD card writer, an OLED screen display, a Bluetooth Low Energy module and a GPS module.

The additional devices are on short 4-wire phone leads in plastic printed boxes of dubious design.

After a great deal of unplanned soldering and the use of header sockets so that none of the bits are permanently stuck in the wrong place, I’ve got a thing that looks like this:


There are issues. The barometer has a separate power supply and now doesn’t communicate, the wind-meter has a degree of noise in its signal, the I2C accelerometer is too complicated, the Dallas temperature sensors can only be read one at a time, and all three SPI devices are incompatible with one another.

In particular the SD card and the bluetooth definitely don’t work at the same time. This was a matter raised by someone last November here, and the superhero inventor of the Teensy (the excellent green boarded microcontroller running the whole show) went to work and came up with a fix that involves five updates to other people’s open source libraries.

The turn-around time and technical completeness on an integration bug of this nature is about a thousand times better than what seems possible within the closed-source corporate software world, where a process like this would involve hundreds of managerial hours and meetings before they could even imagine getting it done. Nobody in the midst of it wonders why it’s such a waste of time. The fact is, with an integration bug like this, the problem-solving process requires easy access to the code of both modules simultaneously. This is usually possible in open-source land, and usually impossible in proprietary-code land. It’s like fixing a fence: it’s much easier if you can access both sides of the fence, even though you are supposedly stepping on someone else’s property.

I haven’t tried the fix, and I realize that if I did I would simply be delaying myself. I’ve spent so long building this gadget that I’ve got to start using it. The fear is that all the ideas I’ve been putting this together for (and boring people about) will turn out to be crap. There is a reluctance to pitch in and get started on any in particular.

I do know that a lot of sensor projects get exactly to this point where they can collect and log the measurements into some database somewhere, and then they stall completely. Happens all the time. I can feel the potential for abandonment in me at this point. This propensity needs a name, for it is as real and as tragic as the well-known Not Invented Here syndrome — which I suffer from in spades. That’s my excuse for wasting two days writing an incomplete parser for the GPS module. Oh well. It’s done now.

The only way to fight it is get started on one of the applications. But which one? Oh I don’t know yet. Can’t make up my mind.

BIG and BOT Policy Proposals (Transcript) by Albert Wenger

Here is the transcript of my talk about a Basic Income Guarantee and the Right to be Represented by a BOT from TEDxNewYork.

All of our economic theory, all of our business practice, all of our public policy, they were all developed during an era of scarcity. Scarcity means that when you want more of something, there’s an additional cost to be paid. And that was always true for physical products but it’s no longer true for digital information. That extra view of the funny cat video on YouTube, it basically costs Google nothing. And it is that zero marginal cost of digital information that is turning everything upside down.

It used to be, for instance, that we would select first, then edit and then publish. Now we can publish many, many things, select a few and edit those. It used to be that people had to go to investors, raise money, then make a product and hope people would buy it. Now you can present your vision for your product to thousands of people, have them contribute, and use that money to make a product that you know people want. It used to be that people were very guarded about how something worked. Now we have open source software, hardware, and even biotech. These are all inversions. They’re things that are being turned upside down. They’re being turned upside down because of the zero marginal cost of digital information, and we’re just at the beginning of these.

Traditional big publishers still dominate music and movies, crowd funding is tiny, open-first biotech is in its infancy. But if you take these trends and you kind of extrapolate them out a little bit, what you get is a kind of digital abundance: a world in which we can learn anything we want to online for free. A world in which all the world’s medical knowhow is available to anybody anywhere in the world for free. Where you can listen to music, enjoy art, read books online for free. And we can even see how eventually that digital abundance could help us reduce the amount of physical scarcity. How? For instance by 3-D printing only products that people actually want. 
Also by taking existing things like cars, buildings, lab equipment and sharing it much more efficiently than we’ve ever been able to do.

So this is the world that I am very, very excited about but we’re not going to get to this world simply through more technology. We’re not going to get to this world simply through some businesses doing innovative things online. We’re also going to need to invert our public policies. And I’m going to want to speak about two examples of such public policy inversions today.

The first is a basic income guarantee or B.I.G. and the second is the right to be represented by a bot. I’ll explain what those two are, and as I talk about them you may think, wow, these are crazy far-out ideas. And the goal here isn’t to say, “Hey Congress, we need these as national laws in the U.S. tomorrow.” The goal is simply to say these are interesting ideas that we should be discussing. And more importantly, we should be experimenting with them to see whether they have merit.

Let me start with the basic income guarantee. It’s quite a simple idea: the government should pay everybody above a certain age, say sixteen, some amount of money every month or week. It’s called basic because it’s supposed to cover your basic needs: food, clothing, shelter. And it’s called guaranteed because it’s supposed to be paid to you no matter what; no matter your gender, no matter your marital status, no matter your wealth. And most importantly no matter what you do. So, whether or not you work. And that’s the inversion in this idea. The inversion is that it used to be that you had to work first in order to get paid. Under a basic income guarantee, you get paid first and then you choose what to work on. It doesn’t do away with the labor market at all. You can still work in a job where you get paid more. It simply puts a floor under everybody’s income.

Now you might say, why would we want that? Well, because it would let us embrace automation instead of being afraid of it. I’ve been around computers for thirty-plus years. And for many decades, we’ve had these promises of artificial intelligence and they have been false promises. But we now actually have major breakthroughs and we have machines that can do many of the things that humans currently do for work and as a source of income. Let me give you two examples. There are about four million people in the U.S. who make a living driving a truck, a taxi, or a bus. But we also know we have self-driving cars now. 
So it’s not a question of if anymore, it’s just a question of when, some of these jobs will be replaced by machines. On the other hand, we have about a million people working in the legal professions. But we now have machines that can very efficiently read through reams and reams of legal documents and even write some of them. Again it’s not a question of if anymore, it’s just a question of when. Now you might say, why do we want to embrace automation? And the answer is because it gives us time and time is great in the world of digital abundance. It’s the time you have to watch TED videos. It’s the time you have to make TED videos. So in a world of digital abundance, we want people to have time. We want people to feel they have the time and the resources to learn new things. We want people to have the time and resources to contribute to those things. And make them free. And basic income guarantee, B.I.G., helps with that in a second way. It helps with that because it creates a much broader base of people who can participate in crowd funding. So instead of saying we need these paywalls around content that keep people out, we can say no, let’s put out free content and then let’s fund it on Kickstarter, Indiegogo, Patreon, or for science or Beacon Reader for journalists. There are many other benefits of basic income that I won’t have a chance to go into in detail. For instance, it deals much better with situations of abuse, whether you have an abusive employer or an abusive partner. Basic income gives you a walk away option from many different situations.

There are lots of objections too and before I talk about some of the objections, let me point out one more thing, which is this is not a traditionally left or right political idea. It’s not a traditionally left idea because it says, in order to finance this you have to do away with programs like food stamps, means-tested programs, and you have to believe in individual agency. It’s also not a traditionally right idea because it wholeheartedly embraces free distribution. It says, let’s tax people who make a lot of money. Let’s tax corporations and then let’s give this basic income to everyone. And as a V.C., I kind of like the fact that a lot of the political establishment is sort of ignoring or dismissing this idea. Because what we see in start-ups is that the most powerful innovative ideas are the ones that are truly dismissed by the incumbents.

So what are some of the objections? The first objection that people have is they say we simply can’t afford this. And some of the proposals that people have floated, like paying people several thousand dollars every month, in fact, do add up to more than the federal and state budgets combined. And we do still want some things from the state, right? We want federal defense, we want roads, we want water, we want broadband. So there are some things that we want the government to do. The interesting thing, though, is that I believe it will take much less money. And it will take much less money because we need to take a dynamic view. We don’t need to ask how much money do you need today? We need to ask, how much money will you need in this world of advanced technology and of basic income guarantee? When we ask that question, what we see is that we already live in a world of technological deflation. And that basic income guarantee will accelerate that. So, roughly since the mid-1990s, in the U.S., the cost of consumer durables has already been declining. The things that have been getting more expensive are primarily services and within that, primarily education and healthcare. Now basic income guarantee will actually help reduce both of those. How? First, because it lets more people contribute to online free education materials, to online free healthcare resources. And it also frees people up to teach and to take care of people. The second objection that has been raised, is that people would simply take this money and spend it on drugs and alcohol. Now there’s no country in the world that has this so we can’t simply point at another country. But people have done studies, as far back as the seventies and there are ongoing studies today about cash transfers. About cash transfers that are not means tested. 
And the World Bank has just published a review of these studies and what they found in looking at nineteen of them, is that there’s simply no evidence that people wind up spending this money on drugs and alcohol, any more than they would use drugs and alcohol already. The third objection that’s often raised is people are lazy, people are going to stop working. Again, the good news is, these studies show that this so-called income effect is quite small. People like to do things. People like to do interesting things. And when people were working less in traditional jobs, in these studies, what they were largely doing was spending more time with family and friends, more time teaching their children, more time taking care of their parents. Those are the very two things I said we want more of in order to reduce the cost of education and the cost of healthcare. It also turns out that people working less is not a bad thing overall. If you reduce the supply of labor, you lift wages for everyone, which is also interesting. You might say, well, why not just raise wages? Why not just have a higher minimum wage? And I’m very sympathetic to that idea, it’ll definitely help people who currently have a job. But it doesn’t help at all with this concept of digital abundance. It doesn’t help people create free online resources. It doesn’t give you the time to learn something from these free online resources because you have to have this job to just cover your basic needs. Same goes for just trying to reduce the work hours. So it’s only the basic income guarantee that addresses this digital abundance and basically frees us to participate in it.

The second inverted idea I want to talk about, is the idea of the right to be represented by a bot. A bot is a piece of software that acts on your behalf. Let me make this more concrete, I went on Facebook the other day because I remembered that a couple of years ago I had written something witty on somebody else’s wall. I was trying to remember who it was. Turns out that Facebook makes this quite difficult. You can’t actually search your own wall posts. Now Facebook has all this data but for whatever reason, they’ve decided not to make it easily searchable. I’m not suggesting anything nefarious, it’s just how it is. So now imagine for a moment if in my relationship to Facebook I was able to use a piece of software. I could now instruct this piece of software to go through the very cumbersome steps that Facebook lays out for finding past wall posts and do it on my behalf. That would be one thing. The other thing I could have done is if I’d been using this bot all along, the bot could have kept my own archive of wall posts in my own data store and I could simply instruct it to search my own archive. Now you may say, well that’s a trivial example. But actually, it’s very foundational. It completely inverts the power relationship between networks and their participants. It also inverts the present legal situation. There are lots of laws at the moment that allow networks to restrict to what degree you can use a bot to interact with them. They basically can restrict you to only use the existing application programming interfaces or the APIs and say only these are legitimate, and on top of that, limit how much you can do.

Now to see that this is a very powerful inversion, I want to talk for a moment about on-demand car services. Companies like Uber, Lyft, and Sidecar. If you’re a driver today, they each have a separate app. That makes it very hard for you as a driver to participate in more than one network at a time. If you had the right to be represented by a bot, somebody could write a piece of software that drivers could run, that would allow them to simultaneously participate in all of these marketplaces. And the drivers could then set their own criteria for which rides they want to accept. Now clearly those criteria would include for instance what the commission rate is that the marketplace is charging. And drivers would go for marketplaces that charge less of a commission. So you can see in this example, how the right to be represented by a bot is quite powerful. It would make it very hard for an Uber or a Lyft or any one of these companies to charge too high a commission because new networks could come up. A cooperatively owned network, cooperatively owned by the drivers, for instance, and the drivers could participate simultaneously in the new network and the old network. And it’s the very threat of the creation of these new networks that would substantially reduce the power of the existing networks. This is important, not just for drivers. We are all freelance workers on Facebook and on Twitter and on all these big social networks. Yes, in part we get paid through free services, free image storage, free communication tools. But we’re also creating value. And it’s not just the distribution of value that we’re worried about, we’re also worried about what do these companies do? We’re worried about questions such as censorship. We’re worried about questions such as, are we being manipulated by what’s being shown to us in the feed? And at the moment, what regulators are doing is they’re trying to come up with ad-hoc regulations to regulate each and every one of these aspects. 
And many of these ad-hoc regulations are going to have completely unintended consequences. And often these consequences will be bad. Let me just give you one example. The European Union has said, if you want to have information on people who live in the European Union, you have to keep it on European Union servers. That actually makes it harder for new networks to get started, not easier. It actually cements the role of the existing networks instead of saying we need to create opportunities for competition with existing networks.
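Going back to the driver example, here is a toy sketch of what "represented by a bot" could mean in practice. Everything in it is hypothetical: the marketplace names, fares, and commission rates are invented purely for illustration.

```python
# Toy sketch of a driver's bot (all names and numbers are hypothetical).
# The bot compares ride offers across several marketplaces, applies the
# driver's own criterion (here: maximize net pay after commission), and
# accepts the best one.

offers = [
    {"marketplace": "NetworkA",   "fare": 20.00, "commission": 0.25},
    {"marketplace": "NetworkB",   "fare": 19.00, "commission": 0.10},
    {"marketplace": "DriverCoop", "fare": 18.50, "commission": 0.02},
]

def driver_take(offer):
    """Net amount the driver keeps after the marketplace's cut."""
    return offer["fare"] * (1.0 - offer["commission"])

best = max(offers, key=driver_take)
print(f"Accept ride on {best['marketplace']} "
      f"(driver keeps ${driver_take(best):.2f})")
```

The point of the sketch is the inversion: the selection criteria live in software the driver controls rather than in any one marketplace's app, so a network charging too high a commission loses rides automatically.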

So I’ve presented two ideas for fundamental inversions in public policy. And you might have a couple of objections to those. The first one could be let’s just not have any new policy at all, let’s just have more technology. We don’t need any policy. And that objection is often motivated by the idea that whatever policy that will come will actually make innovation harder, not easier. It will make it harder to get to that state of digital abundance that I was talking about. And if you look at the history of innovation, if you take the car for example, that’s always true initially, right? So, the early cars were actually steam engines that people were running on the street. And the regulators said, “Woah, that’s dangerous, you have to have somebody with a red flag walking in front of it.” And then the cars got a little better. They said, “Oh yep, still dangerous, can’t go faster than a horse-drawn carriage.” So the early regulation was aimed at slowing down innovation. But ultimately, we only got the benefit of individual mobility because we embraced public policy, because public policy said, here are rules of the road. And public policy said, here are roads. We’re actually going to invest in making roads. So we do need public policy, it’s just we need the right kind of policy.

The second overall objection you might have is, these ideas are just crazy. How will we know they can work? We can’t just do this. And the good news here is, that we can run small local experiments. We could in fact right now run an experiment with a thousand people with basic income guarantee in a city like Detroit. Detroit has very cheap housing stock at the moment. There are entire buildings in downtown Detroit that are empty. So we could run that experiment and I think we should. We could run an experiment with the right to be represented by a bot in a city like New York. New York controls how Lyft, Uber, Sidecar, all of these services operate. So, New York City could say, “If you want to operate here, you have to let drivers interact with your service programmatically.” And I’m pretty sure, given how big a market New York City is, these services would agree.

So, zero marginal cost of digital information gives us promise. It gives us a promise of digital abundance and ultimately, physical abundance. And whether or not we can realize that promise depends on whether we’re willing to invert our own thinking away from scarcity thinking towards abundance thinking.

Thank you.

The Age of Solar System Exploration by Astrobites

If you haven’t heard about the Rosetta mission, and the European Space Agency’s remarkable feat of landing on a comet, then you must be like its lander Philae: living under a rock.

What you probably also didn’t hear much about is the slew of other ways (both recent past and near future) we are exploring up-close-and-personal the more unusual parts of the Solar System. The tiny stuff; the names you didn’t memorize in grade school. The one that doesn’t get to play with the “big boys” anymore. The years of 2014 and 2015 may well be known as the time when our exploration of the solar system truly took off, as we explored asteroids, comets, and minor planets.

Here’s a look back at what we’ve accomplished in the last year, and what we’re about to achieve in the year to come.

Scroll to the bottom to see an abbreviated list of important upcoming events for these missions.


The most recent image of Comet 67P/C-G, taken on January 16th by the orbiting Rosetta spacecraft. Rosetta and its lander Philae (the first objects to orbit and land on a comet) will follow the comet through its orbit to closest solar approach in August 2015. Image c/o ESA

ESA’s Rosetta Mission Lands on a Comet: August 2014 – December 2015

One of the biggest science news pieces of the year, the European Space Agency’s Rosetta spacecraft reached Comet 67P/C-G, entering orbit on August 6th, 2014, after a journey of more than 10 years and 6.4 billion kilometers. On November 12th, the spacecraft’s landing probe Philae became the first human-made object to land on a comet. Unfortunately, a malfunction in the landing system resulted in Philae bouncing a kilometer off the surface, coming eventually to rest in the shadow of a cliff.

Unable to get adequate sunlight to charge its batteries, Philae quickly went into hibernation mode. Before shutting down it was able to return measurements of water vapor, but was unable to drill into the surface to measure the content of the solid ice.

Many models suggest that the high water content of Earth may have come from collisions with comets or asteroids during the late stages of the Earth’s formation. Most water is made of ordinary hydrogen and oxygen, but a tiny fraction contains a deuterium atom (a hydrogen isotope made of a proton and a neutron) in hydrogen’s place. One of the main scientific goals of pursuing comets is to identify the source of Earth’s water. The key to accomplishing this is to see if the abundance of deuterium (see this Astrobite) in a comet’s water matches the levels found on Earth. Philae’s water vapor measurements indicate a deuterium abundance more than 3 times higher than on Earth. Perhaps this suggests asteroids are more responsible for Earth’s water supply than comets.
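For a sense of scale, here is a back-of-the-envelope comparison of those deuterium abundances. The figures below are approximate commonly quoted values (the VSMOW standard for Earth's oceans and the value reported from the mission for 67P), supplied here for illustration rather than taken from the article:

```python
# Back-of-the-envelope D/H (deuterium-to-hydrogen) comparison.
# Approximate published values, used for illustration only.
DH_EARTH_OCEAN = 1.56e-4  # Vienna Standard Mean Ocean Water (VSMOW)
DH_COMET_67P = 5.3e-4     # value reported for Comet 67P's water vapor

ratio = DH_COMET_67P / DH_EARTH_OCEAN
print(f"67P's water is about {ratio:.1f}x richer in deuterium "
      f"than Earth's oceans")
```

A ratio well above 1 is what makes 67P's water look like a poor match for Earth's, pointing suspicion toward asteroids instead.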

The Rosetta team hopes to be able to confirm this result, and perhaps obtain an ice sample with Philae’s drill, if the lander wakes up. Most of the team has little doubt the lander will resume function in the coming months. But since the orbiting Rosetta has yet to pinpoint Philae’s final landing spot, when the probe will be able to get the 5 to 7 additional watts of power it needs is a tough question to answer. The comet (and accompanying spacecraft) is approaching the Sun and will reach its closest point in August 2015. The team hopes the change in scenery may bring more sunlight to Philae’s solar cells.

In the meantime, Rosetta will make a close approach orbit of the comet in February 2015 — snapping photos which should resolve details down to a few inches — and is planning to soar through an outgassing jet in July, when the comet’s tail begins to form. The Rosetta mission is scheduled to end in December 2015, although large public support for the mission may help researchers extend its lifetime into 2016.


NASA’s spacecraft Dawn captures images of Ceres: our nearest dwarf planet neighbor and the largest asteroid in the asteroid belt. Dawn will enter an orbit around Ceres beginning in March 2015. Image c/o: NASA/JPL

NASA’s Dawn Mission Orbits Asteroid Ceres: March 2015 – July 2015

A few years ago, NASA’s Dawn spacecraft spent about 12 months orbiting Vesta, one of the largest asteroids in the asteroid belt. For more details about Dawn’s encounter with Vesta, see this past Astrobite.

After leaving orbit around Vesta, Dawn spent two and a half years traveling across the asteroid belt to catch up to Ceres, the largest known asteroid. On its own, Ceres makes up about 30% of the mass of the entire asteroid belt. It’s so massive that its gravity is strong enough to shape it into a rough sphere, which is why Ceres is also classified as a dwarf planet.

This week, NASA released the latest images of Ceres, taken as Dawn approaches the asteroid. In just a few weeks, on March 6th 2015, Dawn will enter orbit around Ceres. Asteroids and comets are pieces of the debris left over from the formation of the solar system planets. NASA wants to understand Ceres’ formation, its material makeup, and why it didn’t grow any larger. This information will help distinguish between theories that describe how planets formed in our solar system.

As has previously been discussed on Astrobites, long distance observations indicate the presence of water on the dwarf planet. Just like comets, asteroids may be responsible for delivering the Earth’s water supply, and the Dawn team hopes to improve upon these measurements. Dawn’s main science mission continues until July 2015, after which it will be shut off and remain in orbit around Ceres for a very long time.


An artist’s illustration of NASA’s New Horizons spacecraft, which will pass by Pluto in July 2015. The spacecraft will not be able to maintain an orbit around the tiny dwarf planet, but will instead fly farther out into the Kuiper Belt. Image c/o: NASA/JPL

NASA’s New Horizons Flies by Pluto into the Kuiper Belt: July 14 2015

Launched in 2006, NASA’s New Horizons has been navigating space for over 9 years, and has perhaps the most exciting itinerary of all the spacecraft on this list. To save on fuel, New Horizons executed a gravity assist (or slingshot) maneuver around Jupiter in February 2007. Some beautiful photos of Jupiter resulted as an added benefit of this layover.

New Horizons has been in frequent phases of hibernation since its encounter with Jupiter, and is now making its approach to Pluto: probably the most popular dwarf planet. On July 14th 2015, New Horizons will make its closest approach within 10,000 kilometers of Pluto. The spacecraft won’t be stopping at Pluto, either, but will continue into the Kuiper Belt to investigate objects astronomers can barely see from Earth. To learn more about New Horizons, and its path after Pluto, see this Astrobite.

The Future of Solar System Science

With Rosetta, Dawn, and New Horizons continuing to gather information, the future looks bright for humanity’s goal of understanding our solar system. Asteroids, comets, dwarf planets, and Kuiper Belt Objects hold many clues for how the planets — including our own — formed from the initial ingredients around the Sun. In the next decade, NASA hopes to complete a mission to capture an asteroid and bring it into orbit around the Moon. This would be a remarkable opportunity to study the remnants of the solar system’s formation.

For the present, here is a timeline of the most important events coming in the future of space exploration:

February 2015: Rosetta makes close approach to Comet 67P/C-G, resolving features as small as several inches.
March 6 2015: Dawn enters orbit around Ceres
Spring/Summer 2015: Rosetta’s lander Philae (hopefully) wakes up and takes new samples from the surface
July 14 2015: New Horizons makes closest ever approach of Pluto, on its way into the Kuiper Belt
July 2015: Rosetta scheduled to make pass through Comet’s outgassing jet
July 2015: Scheduled end of Dawn science mission
August 2015: Comet 67P/C-G closest approach of Sun: Rosetta observes comet activity and tail
December 2015: Scheduled end of Rosetta science mission
January 2019: New Horizons makes possible pass-by of Kuiper Belt Object 1110113Y

Addressing some common questions about Comet Lovejoy by The Planetary Society

Lowell Observatory's Matthew Knight addresses several points of confusion that have repeatedly come up in the coverage of Comet Lovejoy.

Field Report from Mars: Sol 3902 - January 15, 2015 by The Planetary Society

Larry Crumpler gives an update on the status of Opportunity's traverse toward Marathon Valley.

Fountains of Water Vapor and Ice by The Planetary Society

Deepak Dhingra shares some of the latest research on Enceladus' geysers presented at the American Geophysical Union (AGU) Fall Meeting in San Francisco last month.

January 23, 2015

How do you cope? by Feeling Listless

Life Enough, enough now.

Well, no, actually there's never enough. It doesn't end ever. It just keeps going and going and going and, you get the idea, and it doesn't end and at this point will never end.

It's all become a bit overwhelming now and I don't know what to do about it.

I refer of course to everything, the everything in this case being everything. On the internet.

After years of dial-up and mobile internet with all of their relative limitations, last year, as you know, we finally received unlimited broadband internet and, knowing that I would, I've fallen all in, my addictive personality pouring through the cracks. The devices. The services.

It's overwhelming and I've now reached the stage where I don't know what to do about it.

My Twitter legend says that I'm "Intensely interested in everything" and that's the problem. Apart from some sport, I really am. Always have been but it's now become especially acute.

Now that personality trait has come up against everything available. (Almost) everything. Whole avenues for that interest to go to. All of those films. All of that music. All of the books. Plus the infinite, infinite media. Articles, articles, articles.

That's just what's available to me right now across the various tabs I have open in Chrome, across Twitter and my RSS reader (which also happens to be gmail).

Then there are the backlogs, the hundreds of videos playlisted in Youtube, the links stored in Pocket or favourited in Twitter. Oh and whatever's been sent to Kindle. Plus all of those are part of another channel or website filled with other presumably equally interesting articles and in my mind I can see networks and trees of just stuff which all seems like it could be interesting, entertaining or educational.

All of which ignores everything in the real world I have sitting around the house waiting to be read/watched/listened to. Or broadcast on television. Or on the radio. Then available on catch-up.

I finally downloaded the Comixology app the other day. Just the free comics would take a solid week to read.

Here's another example. At the moment I've started watching my way through Alias again after having bought the boxed set in an HMV sale two years ago. I'm really enjoying it. But eight episodes in and I find myself wondering if I should be watching American Horror Story instead. Or House. Or ...

As you can imagine The Internet Archive is my intellectual and emotional Death Star.

So I have to ask...

How do you cope?

Plus on top of all this how do you find time to watch linear television? RTD's many series Cucumber, Banana and Tofu began this week. Two solid hours of programming a week for however many weeks. The reviews and word of mouth have been very good. But then I look at the pile of blu-rays I have unwatched, all of the content on Netflix, the iPlayer, all the videos I've suggested I might "watch later" on Youtube and, well, everything else and I think do I want to? Do I need to?

Again I ask...

How do you cope?

My guess is I need to limit myself.  Try going back to my core interests.  But even core interests like Elizabethan theatre or film are endless, near infinite avenues to be pursued.  Plus it's called limiting for a reason.  This Emily Gould piece about writing her first novel looks interesting, just as everything on Medium tends to look interesting.  But it's also really, really long, at least a half hour commitment.  Wouldn't that half hour be just as productively spent starting to read the introduction to the Oxford edition of Paradise Lost I was given at Christmas but which is unread so far, or yesterday's long read in The Guardian about Yakov Smirnoff or any one of the hundreds of pieces which sit unread in Pocket?

How do you cope?

I know what I potentially should do.  Scorched earth.  Delete all the bookmarks, the saved until laters, begin with a clean slate.  If I haven't read or watched or listened to something yet, I never will, and it doesn't matter if I haven't seen that DP/30 interview with costume designer Sandy Powell about The Young Victoria.  Or any of the many hundreds of interviews in the DP/30 channel.  But I want to.  I really, really want to.

How do you cope?

Please do tell.

"No studio would make a movie like this one at all" by Feeling Listless

Film Focus is key to the most subtly powerful moment in All The President’s Men:

"All The President’s Men tells the story of the Watergate investigation in ways that scarcely seem fathomable from today’s perspective. Most notably, the movie runs nearly two and a half hours, yet ends before Bob Woodward (Robert Redford) and Carl Bernstein (Dustin Hoffman) crack the case—in fact, it concludes on a note of defeat, immediately after they make a huge mistake that sets back their efforts enormously. No studio nowadays would even consider trusting the audience to know that a movie’s heroes will be vindicated at a later date. (No studio would make a movie like this one at all, arguably, but that’s a separate issue. The closest contemporary equivalent is Zodiac, and that’s at least punctuated by murders.)"

Simulating X-ray Binary Winds by Astrobites

  • Title: Stellar wind in state transitions of high-mass X-ray binaries
  • Authors: J. Čechura and P. Hadrava
  • First Author’s Institution: Astronomical Institute, Academy of Sciences, Czech Republic
  • Paper Status: Accepted for publication in Astronomy & Astrophysics


    A 3D surface model of X-ray binary Cygnus X-1. Contours and lines represent regions of equal density. Fig. 10 from the paper.

How do you simulate a massive star’s behavior when its closest neighbor is a black hole? Astronomers routinely make simplifying assumptions to understand how stars behave. If there are thousands of stars orbiting one another, treat them as point masses. If there is a single, solitary star, treat it as a perfectly symmetrical sphere. But just like massless pendulums and frictionless pulleys, these ideal scenarios aren’t reality. Sometimes, to truly understand stars, you need to roll up your sleeves and start thinking about pesky details—things like three dimensions, X-ray photoionization, and the Coriolis force.

Windy with a chance of X-rays

In today’s paper, Čechura and Hadrava examine what happens to the runaway gas from the surface of massive stars—the stellar wind. In particular, they look at systems with massive stars so close to a companion neutron star or black hole that the stellar wind is jarred into a new orbit and heated to the point of emitting X-rays. This is a high-mass X-ray binary.

The authors begin with a 2D model to understand how the stellar wind behaves differently when one star is more or less massive than the other, or when the wind itself is programmed into the model in subtly different ways. As it turns out, emitting tons of X-rays is more than the end result of stellar wind particles slamming into an accretion disk. Those X-rays continue the story by ionizing nearby gas and slowing down the incoming wind. When the wind slows, the overall shape of the system changes thanks to gravity and the Coriolis force, which in turn affects how many X-rays are emitted!
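The paper's actual wind prescription isn't reproduced in this summary, but winds from hot massive stars are commonly parameterized with a simple "beta" velocity law, and a sketch of it gives a feel for the kind of ingredient such simulations start from. The stellar radius and terminal speed below are generic O-star-like numbers for illustration, not values from the paper:

```python
def beta_law_velocity(r, r_star, v_inf, beta=1.0):
    """Standard beta-law for a line-driven stellar wind:
    v(r) = v_inf * (1 - R_star / r) ** beta, valid for r >= R_star.
    The wind starts slow near the stellar surface and approaches the
    terminal speed v_inf far from the star."""
    if r < r_star:
        raise ValueError("r must be at or outside the stellar surface")
    return v_inf * (1.0 - r_star / r) ** beta

# Generic O-star-like numbers (illustrative only):
R_STAR = 1.4e12  # stellar radius in cm (roughly 20 solar radii)
V_INF = 2.1e8    # terminal wind speed in cm/s (2100 km/s)

for x in (1.1, 2.0, 10.0):
    v = beta_law_velocity(x * R_STAR, R_STAR, V_INF)
    print(f"r = {x:4.1f} R*: v = {v / 1e5:7.1f} km/s")
```

In an X-ray binary, photoionization by the compact companion can suppress the line driving that this law encapsulates, which is why cranking up the X-ray luminosity in the simulation stalls the wind before it reaches the accretion disk.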

Cygnus X-1’s split personality

With these variables better understood, the authors create a full-fledged 3D hydrodynamic model of a high-mass X-ray binary. A 3D model returns more accurate densities and velocities than a 2D model because the geometry is more realistic. They base this simulation on the well-studied X-ray binary Cygnus X-1, which is generally observed in one of two states: either it is emitting relatively few X-rays of high energy (low/hard), or it is emitting many X-rays of low energy (high/soft). In the low/hard state, wind from the massive star is actively flowing into an accretion disk around the companion. The high/soft state takes over when that flow is disrupted.

To simulate the transition from Cygnus X-1’s low/hard state to its high/soft state, the authors suddenly increase the X-ray luminosity of the compact companion. As a result, gas in the stellar wind never makes it to the accretion disk because it is bombarded with X-rays. It turns out that this X-ray photoionization process is even more important than the simpler 2D model suggested.


3D cross-section of Cygnus X-1’s stellar wind in the low/hard X-ray state, when material is flowing into the compact companion’s accretion disk. Each column represents a 90-degree change in viewing angle. From top to bottom, the rows show particle density, velocity magnitude, and degree of ionization. The black region in the ionization panels is an X-ray shadow, where no particles are photoionized. Fig. 7 from the paper.

3D cross-section of Cygnus X-1’s stellar wind in the high/soft X-ray state, when material from the stellar wind is not flowing into the compact companion’s accretion disk. As in the previous figure, the columns show three mutually perpendicular viewing angles and the rows show different physical parameters (density, velocity magnitude, and degree of ionization). Fig. 8 from the paper.

Of course, even this detailed 3D model isn’t perfect. In the future, the authors would like to more accurately consider radiative transfer as well as account for turbulence in the stellar wind. And Cygnus X-1 is a single test case! Still, this is a huge step forward from point masses, perfect spheres, or even a 2D simulation. Half the challenge in simulating reality is choosing which assumptions are reasonable tradeoffs to construct a useful model, and this paper illustrates just how important X-rays are in determining the behavior of an X-ray binary.

January 22, 2015

Scooped. by Feeling Listless

Film Nine years since its original release, eight years since I imported a copy from Sweden and reviewed it on the blog, five years since I watched it as part of my Woody Allen project and about a month or so since I saw it as part of #garaiwatch, Woody Allen's film Scoop is finally enjoying a UK dvd release on the 9th February (as spotted by Vodzilla).

Amazon have it for £10.25 which seems a bit steep now for the customary vanilla edition though it's a minor miracle that it even exists at all since it didn't receive a theatrical release (no matter what the IMDb says) and has only seen two TV broadcasts late at night on the BBC.  An imported edition on blu-ray is also available for the same price.  I think it's Italian.

Any good?  I'll refer you to the reviews above.  It's not vintage Woody and it's certainly not Midnight in Paris, but for my money it's the best of his British films and certainly worth seeing to spot all the cameos, with high-end character actors turning up for one line or as glorified background artists just so they can say they've been in a Woody Allen film.

DLD 2015: It's Just the Beginning by Albert Wenger

I just returned from DLD. I will write a post with some more thoughts when I have less catching up to do, but in the meantime here is the video of my talk

And here are the slides (this time without a line spacing problem)

And finally, if you speak German you can also hear me talk about the same topic.

Interested in Exoplanets? Astrobiology? Apply now for ERES & AbGradCon2015 by Astrobites

So many conferences, so little time! I’d like to talk about TWO upcoming conferences designed specifically for undergraduate students, graduate students and early-career scientists in the fields of astrobiology and exoplanet science: 1) the Astrobiology Graduate Conference and 2) the Emerging Researchers in Exoplanet Science Symposium.

The importance of conferences cannot be stressed enough in the field of astronomy. Up-and-coming students should always be looking for opportunities to meet other scientists, build collaborations, give talks, present posters, and so on. This requires a suite of skills that are not exactly taught to any of us (unless you have an awesome advisor or are at one of the few universities that offer career-based seminars). So where does an emerging researcher acquire these skills? This is where student-led conferences become invaluable.

Imagine a conference where only your peers sit before you, where the pressure is off and networking and collaboration are stress-free. Now couple that with a conference that actually provides (limited) travel funds and you’ve got ERES and AbGradCon: conferences planned with the student in mind.

Below, I will give short synopses of both conferences; readers should bear in mind that applications are now open. So apply soon!

Emerging Researchers in Exoplanet Science Symposium

ERES is a brand new symposium designed for students and postdocs working in the realm of exoplanet science and related disciplines (e.g., brown dwarfs, protoplanetary disks, star formation, related instrumentation and theory). This conference is unique in that not only does it allow students to share their research through talks, poster pops and poster sessions, but it will also feature a set of seminars and discussions on topics such as work/life balance, applying for fellowships/grants and various career opportunities. The field of exoplanet science is booming, and it is therefore crucial for the next generation of scientists to come together and form a community. ERES will be the platform to build such a community.


Astrobiology Graduate Conference

Astrobiology is a crazy realm of science, since it incorporates all of the sciences. That means that if you are interested in astrobiology, you might find yourself teeter-tottering on the edge of biology/astronomy, chemistry/geology or physics/meteorology. If this is you, you belong at AbGradCon. AbGradCon provides a unique setting where students from all disciplines can gather in one location to discuss current research in astrobiology. It also allows students to increase their exposure to topics they wouldn’t normally encounter, all in the name of being truly interdisciplinary. If that isn’t enough, here are some awesome insights into what will be going on:

  1. Two days of scientific and collaborative sessions
  2. Astrobiology trivia at local venues
  3. Science panels that are open to public of Madison
  4. Science communication workshop
  5. Undergraduate student poster competition
  6. Fieldtrip to Devil’s Lake State Park

Not only does AbGradCon provide a springboard for expanding your own scientific horizons, it also gives students the chance to help expand public engagement with the astrobiology community.

Applications for both these sessions are now open. For more information, please visit the links provided above.

Curiosity update, sols 814-863: Pahrump Hills Walkabout, part 2 by The Planetary Society

Curiosity has spent the last two months completing a second circuit of the Pahrump Hills field site, gathering APXS and MAHLI data. The work has been hampered by the loss of the ChemCam focusing laser, but the team is developing a workaround. Over the holidays, the rover downlinked many Gigabits of image data. The rover is now preparing for a drilling campaign.

The President's 2016 Budget Is Coming by The Planetary Society

The 2016 budget cycle for NASA kicks off on Feb 2nd, when the White House releases the President's Budget Request. Here's what to look for.

January 21, 2015

My Favourite Film of 2012. by Feeling Listless

Film By 2012, I'd pretty much said goodbye to visiting the cinema on a regular basis.

Too costly, too many idiots in the audience, not being able to trust the cinema to provide decent projection, and in the case of Picturehouse at FACT, toilets on a different floor to the screens, leading to a mad, dangerous rush down concrete stairs if caught short in the middle of a film.

There were and still are exceptions and mainly these have to do with spoilers.

As we discussed the other day, I'm about as spoilerphobic as it's possible to be, but luckily most films' narratives aren't scaffolded by twists.

But some are and it's these films which tend to lead me back to the cinema.

Most of the time now that's MARVEL releases. It wasn't always like this. I watched the whole of phase one on blu-ray via Lovefilm. But SHIELD's narrative enmeshing with the theatrical releases has meant I've needed to see them so as not to spoil the television series, or vice versa. Lord knows what happened for people who didn't see Captain America: The First Avenger before the television version.

The other kind are those which people are talking about a lot.

Cabin In The Woods was one of those.

Despite having sat on a shelf for a few years waiting for release, it was pretty clear, pretty early, that everything in the design of the pre-publicity was about making sure the audience thought they were going to see one kind of film but would end up with another.

Arguably it's the first time the poster was itself a spoiler, though cleverly you wouldn't realise just how much until you'd seen the film.

Which means I ended up in screen three of Picturehouse at FACT on the day it opened and was indeed caught short in the middle of the film so had to make the mad, dangerous rush down concrete stairs to the toilets on a different floor.

The experience was, well, average. Sitting on the front row because what used to be my usual seat was taken, I had to deal with that screen's acute problem of having the illumination from the Emergency Exit sign throwing itself across the bottom half of the right hand side of the screen ruining the blacks. Perhaps this has been fixed since.

Cabin in the Woods is Cabin in the Woods. If you've seen it you'll nod along sagely and agree that the once proposed sequel should never be spoken of again.

If you haven't I'm not about to spoil it for you, especially if you've managed to get this far.

That's why the title bar selection is as bland as bland could be.

Now it's time for you to start taking bets on 2011's choice...

Are There Age Spreads in Intermediate Age Clusters? by Astrobites

Authors: N. Bastian and F. Niederhofer

First Author’s Institution: Astrophysics Research Institute, Liverpool John Moores University

Status: Accepted by MNRAS


Can star clusters be host to multiple events of star formation? Data from the Hubble Space Telescope seem to suggest that this is the case.  While we often assume that all the stars in a stellar cluster have the same age, astronomers have recently found that a majority of intermediate-age clusters in the Large and Small Magellanic Clouds display evidence of a spread in the ages of their constituent stars. To see where this comes from, let’s take a step back and look at some properties of stars.

Figure 1 from the paper: a color-magnitude diagram of NGC 1806, showing the sub-giant stars (in red) clustering around the isochrone consistent with 1.41 Gyr. The narrowness of the sub-giant branch suggests that the stars do not have an age spread.

Stars aren’t born with just any luminosity or color. If you were to plot stellar color against luminosity (what is known as a Hertzsprung-Russell Diagram or a color-magnitude diagram), you’d find a band where the newly-formed stars tend to lie, known as the main sequence. As the stars progress through their lives, they eventually evolve away from the main sequence, with the most massive stars spending the least amount of time on the main sequence and the least massive stars spending the most time on the main sequence. The point where the star leaves the main sequence is known as the main sequence turnoff. While it’s very difficult to measure the age of any single star, we can often use the main sequence turnoff to get an estimate of a stellar cluster‘s age.

How does this work? In general, stars are not formed in isolation, something that’s been mentioned previously on astrobites. They are usually formed in groups, from massive clouds of molecular gas often hundreds of solar masses in size. As a result, we tend to assume that all the stars in a stellar cluster are roughly the same age (what’s known as a simple stellar population, or SSP) and at the same distance from us. We can then estimate the age of the stellar cluster by looking at the age of the stars at the main sequence turnoff; if all the stars in a cluster are about the same age, then the stars leaving the main sequence should also be about the same age, producing a narrow main sequence turnoff. This age is then the age of the stellar cluster.
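
The turnoff-to-age step can be made concrete with the textbook back-of-the-envelope scaling for main-sequence lifetimes, t ≈ 10 Gyr × (M/M☉)^(−2.5). The exponent and normalization vary between stellar models, so this sketch is purely illustrative and not taken from the paper:

```python
# Rough sketch of the main-sequence-turnoff idea, using the common
# approximation t_MS ~ 10 Gyr * (M / M_sun)^(-2.5). The exponent
# depends on mass and stellar model; treat the numbers as illustrative.

def ms_lifetime_gyr(mass_msun: float) -> float:
    """Approximate main-sequence lifetime in Gyr for a star of given mass."""
    return 10.0 * mass_msun ** -2.5

def turnoff_mass_msun(cluster_age_gyr: float) -> float:
    """Invert the scaling: the mass of stars just now leaving the main sequence."""
    return (cluster_age_gyr / 10.0) ** (-1.0 / 2.5)

# For the 1.41 Gyr age fitted to NGC 1806, this crude scaling puts
# the turnoff near ~2.2 solar masses.
age = 1.41
print(f"turnoff mass at {age} Gyr: {turnoff_mass_msun(age):.2f} M_sun")
```

The point is that a single sharp turnoff mass corresponds to a single cluster age, which is why a broad turnoff looks like an age spread.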

On the other hand, if a stellar cluster contains stars formed from several different star formation events, there would be a spread in the age of stars at the main sequence turnoff, known as an extended main sequence turnoff, or eMSTO. The presence of eMSTOs in many intermediate age clusters has caused astronomers to suggest that these clusters have been host to many star formation events. This problem was previously discussed in this astrobite, which focused on the star cluster NGC 1651.

However, when the authors of the previous astrobite paper investigated NGC 1651, they found that other features indicative of multiple generations of star formation–like a wide sub-giant branch–were not present. This led them to conclude that the extended main sequence turnoff might not be an indication of a spread in age after all. But is this true for other intermediate-age clusters?

NGC 1806 & NGC 1846

The authors of today’s paper have looked at the intermediate-age stellar clusters NGC 1806 and NGC 1846, both located in the Large Magellanic Cloud, to see if they can find an age dispersion in other parts of the color-magnitude diagram. In particular, they focus on the width of the sub-giant branch and the red clump. These clusters are estimated to have age spreads of about 200-600 Myr largely based on their MSTO regions.

They began by making color-magnitude diagrams of the stars in both clusters. The one for NGC 1806 is shown in Figure 1. The blue lines running through the diagram are isochrones, curves that represent the location of stars with the same age. The sub-giant branch stars, in red, are clustered around one isochrone, which seems to indicate that there isn’t a spread in their ages.

Figure 2: Figure 3 from the paper. The black histogram indicates the magnitude difference between the observed sub-giant stars and the synthetic sub-giant stars in their 1.44 Gyr isochrone model. The red distribution shows the expected distribution for their simple stellar population model convolved with the observational errors. As we can see, this doesn’t explain the tail ends of the black histogram. On the other hand, the blue distribution, which is the synthetic stellar population model with an age spread that best fits the MSTO region, misses the core of the black histogram.

The authors then created two synthetic color-magnitude diagrams, one showing the distribution expected if the eMSTO were caused by an age dispersion, and the other for a simple stellar population with an age of 1.44 Gyr (their best fit to the sub-giant branch). To compare observations with simulations, they took each observed sub-giant star and subtracted the magnitude expected from the 1.44 Gyr isochrone. This is shown as the black histogram in Figure 2. After estimating the observational errors, they convolved them with the distribution of stars expected from a simple stellar population (in red). Finally, these were compared with the distribution expected from stars that formed over an extended star formation history (in blue). The 1.44 Gyr isochrone is able to reproduce the peak of the histogram but fails to catch the tail ends, which the authors acknowledge are therefore unlikely to be explained by photometric errors (errors in the measurement of the flux) alone. On the other hand, the model that used a stellar population with an age spread fails to account for the peak of the histogram, suggesting an inconsistency between the sub-giant branch and a large age spread. When they analyze the red clump, they also find the stars clustered around one isochrone.
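
This style of test can be mimicked in a few lines. The following is not the authors' pipeline, and the error and scatter values are invented for illustration, but it shows why an age spread should widen the residual histogram beyond what photometric errors alone allow:

```python
# Toy version of the Figure 2 comparison: residuals of sub-giant
# magnitudes about a single-age isochrone, under (a) a simple stellar
# population whose scatter is only photometric error, and (b) a
# population with an age spread, which adds intrinsic scatter.
# All numbers below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_stars = 500
phot_err = 0.02            # assumed photometric error, mag
age_spread_scatter = 0.06  # extra scatter an age spread would add, mag

ssp_residuals = rng.normal(0.0, phot_err, n_stars)
spread_residuals = rng.normal(
    0.0, np.hypot(phot_err, age_spread_scatter), n_stars
)

# The SSP model predicts a much narrower residual distribution;
# comparing the observed width against both predictions is the test.
print(f"SSP std:    {ssp_residuals.std():.3f} mag")
print(f"spread std: {spread_residuals.std():.3f} mag")
```

A narrow observed histogram then favours the single-age model, exactly the situation the authors report for the sub-giant branch.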

The authors repeat their analysis for NGC 1846 and obtain similar results, leading them to conclude that eMSTOs are not caused by an age spread.

Maybe Not

Their results are consistent with the assumption that stellar clusters do not exhibit a range of stellar ages, and with a number of findings in the literature that support a lack of age spreads in stellar clusters. Instead, the authors point towards stellar rotation or interacting binaries as two possible causes of the eMSTOs. Stellar rotation can change the structure of a star and its inclination angle to an observer, causing it to have a different effective temperature and therefore color. However, we still don’t fully understand the effects of rotation on the sub-giant branch of the color-magnitude diagram, so it is possible that this explanation, while consistent with the eMSTO region, could be in conflict with other parts of the color-magnitude diagram. Another possible explanation they offer is the presence of unresolved binaries: if two stars in a binary cannot be resolved, we record the flux of both stars rather than just one. These binaries are not expected to cause significant broadening in either the sub-giant branch or the red clump, but they also may not fully explain the eMSTO. The authors acknowledge and encourage alternative explanations for the eMSTOs as well.

Despite the growing evidence for an alternative explanation to age spreads as the cause of the eMSTOs we observe, it’s probably too soon to conclude definitively either way in this debate. At the very least, these results indicate that we still need to study eMSTOs and their possible causes in greater detail.

The Moon, In Depth by The Planetary Society

Explore a new collection of 3D lunar landscapes.

New Dawn images of Ceres: comparable to Hubble by The Planetary Society

Dawn has captured a series of photos of a rotating Ceres whose resolution is very close to Hubble's, and they show tantalizing surface details.

Pretty Picture: Comet Lovejoy by The Planetary Society

Astrophotographer Adam Block shares an image of Comet Lovejoy, which is currently visible with binoculars.

January 20, 2015

The Flood (The Complete Eighth Doctor Comic Strips Volume Four). by Feeling Listless

Comics This is it then, the final selection of comics barring cameos and whatever plans US licensees Titan have for future publications. My copy of this graphic novel has the first page torn out. It was bought at the Barnardo's charity shop at the bottom of Church Road in Liverpool (near Penny Lane) for one pound and I've always wondered how this wanton act of vandalism occurred. If you're at all interested in catching up yourself, all four are due to be reprinted I believe, but they're still available at various prices on Amazon, and if you're a recent convert and interested in the history of the show, look no further.  Big Finish rightly receives a lot of credit for influencing the past nearly ten years (ten years!) of stories, but it's impossible not to read something like The Flood and watch the franchise being re-engineered for modern audiences in front of your eyes, in comic art and speech bubbles.

Where Nobody Knows Your Name

Just the sort of story which the spin-off universe excels at, because it can’t properly be justified in a television series with just thirteen episodes and a budget: this is a chance for the Doctor to stop for a moment and reflect on what it is that he does. Closing Time is the nearest equivalent perhaps, but that also shoehorned in a Cyberman threat, whereas this is just about the Doctor talking through his problems with a kindly innkeeper. At this distance, the surprise won’t resonate with everyone, but as writer Scott Gray mentions in his notes, at the point when this appeared fandom was a pretty closed shop in which all back references made sense without much coaching. Ironically the strip was published at the end of April 2003, at about the moment when the announcement was about to come which would change everything.

The Nightmare Game

Being an only child and not having many interested friends, unlike the writers of this I wasn’t really exposed growing up to anything which wasn’t bought for me, which generally consisted of odd issues of Spider-Man, Whizzer and Chips, and whatever remaindered comics, with the titles cut off the front covers, were sold on Speke Market. Which is why, until I read the notes for this, I didn’t realise that there were more football comics than Roy of the Rovers, so I entirely missed most of the subtleties inherent in what’s being achieved (or not: Hickman and Roberts are still unhappy about the third part). Still it’s good to see the franchise taking a rare retro-nostalgic visit to its own period of production, pre-Cold War, with a story which feels more like a precursor to Life on Mars. Plus the Doctor’s wearing a recreational fez on the first page!

The Power of Thoueris

Another one-shot which feels like a precursor to Doctor Who Adventures, in which the Doctor already knows full well what the threat is as soon as it appears, so that it can be dispatched within a few pages. It’s notable how, structurally, despite the lack of pages, there’s still a need for a one-off companion, so that even within the comic the Doctor doesn’t simply use thought bubbles to convey information. But this also allows Scott Gray to stray into the show’s typical territory of portraying a “god” as little more than an alien tumbling into Earth’s history at an inopportune moment, and he was well aware of the parallels with Pyramids of Mars, even changing the name of the antagonist so as not to create inconsistencies within the mythology of the franchise (something Who doesn’t often care that much about).

The Curious Tale of Spring-Heeled Jack

Superb. Spoilers ahead, so you’d best stop reading now; I don’t have room for the usual textual buffer shenanigans. The companion twist got me, even in graphic novel form, and is roughly what I assumed Clara would be. You could see how unsuspecting readers of the strip in 2003 might have assumed that the writers were developing a new companion for the Doctor, someone not unlike Charley, the TARDIS scene cementing the thought, and I’d entirely forgotten it, what with there being twelve whole years since I originally read this. Knowing the future doesn’t stop its potency either. Everything about CTofSHJ looks forward to the Paternoster Gang stories, right down to leaving a devilish defender at work in Victorian London, though that’s probably more to do with penny dreadfuls and Doyle as joint sources than direct influence.

The Land of Happy Endings

During the wilderness years and beyond, plenty of ink was expended trying to rationalise the TV Comic strips within the mythology perhaps because they’re so beloved by old fans. The New Adventures suggested Dr Who was a creation of the Land of Fiction and here’s the comic strip with its theory that he’s having the adventures the Doctor dreams of having in comparison to the horrible reality of what he otherwise lives. Personally I’ve taken the same approach with them as everything else. You don’t need to explain them. Within the “mythology” they happened to a version of the Doctor in some version of the time stream, just as there’s been a human Dr Who too who looks like Peter Cushing. At a certain point he called himself Dr Who and had grandkids. Then "reality" changed and he didn’t.

Bad Blood

The fake-out story seems to have been used in all three spin-off media, but not quite in the same way as here, where a story at first appears to be another one of the stand-alones, in this case a celebrity historical about Sitting Bull and Custer and werewolves (a pitch which almost writes itself), as had been prevalent recently, before dropping a dollop of the overall arc in towards the end of the second part. What’s impressive is that the script then doesn’t sideline either of the historical figures, making them central to the story, and indeed to the moment when Destrii is handed a first glimpse at redemption. This is of course in contrast to Let’s Kill Hitler, which would later sideline its historical figure, the one which even features in the title, entirely on purpose, in a cupboard, just to be funny.

Sins of the Father

Because every fantasy franchise at some point ends up with a story called Sins of the Father. The difficult penultimate story doesn’t feel that way at all, which, as I discovered with the novel To The Slaughter, seems to be an element of this format: the sense of being part of a continuing adventure even when a section of it is ending. The introduction of Destrii is problematic in these circumstances, though. Rather like Ensign Ro in Star Trek and a dozen characters in The West Wing, a lot of energy is expended integrating her into the TARDIS crew even though she’ll ultimately only appear in one more story, albeit one which is ten episodes long and quite important in the ongoing development of the franchise. Perhaps at some point Big Finish could offer us a further adventure in audio form?

The Flood

Reader, I just sobbed. I was listening to the 50th Anniversary compilation while reading, and wouldn’t you know it but Murray’s Song of Freedom turned up during the final few pages, and I was hit with a wave of emotion, remembering what it was like to read these final pages in 2005 on the eve of the show returning to television, the sadness of Eighth’s tenure ending mixed with the revolution in the air, and well, yes, there we are. When writing here in 2013 about the commercially heroic decision not to include the regeneration, little did I realise how important that would end up being at the 50th, when the War Doctor emerged. Because we know that if a regeneration from eighth to ninth had been shown in the comic, sanctioned by RTD, Moffat is too much of a fan to have contradicted it. Bye then, Eighth, for now. I'll hear you again in Mary’s Story.

Are There Two Missing Planets In The Solar System? by Astrobites

Title: Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets
Authors: C. de la Fuente Marcos and R. de la Fuente Marcos
First Author’s Institution: Universidad Complutense de Madrid, Ciudad Universitaria, E-28040 Madrid, Spain

For years astronomers have wondered if there might be more planets in the Solar System, far beyond the orbit of Neptune. Although a survey made by the WISE spacecraft showed that there are no large gas giants left to be discovered, a recent finding has prompted the authors of this paper to propose the existence of not one, but two smaller planets in the outer Solar System.
Announced in 2014, the finding in question was 2012 VP113, a tiny planetesimal no more than a thousand kilometres across, discovered by astronomers at the Carnegie Institution for Science. 2012 VP113 will never get closer than 80 au1 to the Sun, and will swing out to more than 240 au over the course of its orbit. Only one other Kuiper Belt Object (KBO), the slightly larger Sedna, has been found at these distances. What made the discovery most interesting was that 2012 VP113 and Sedna have curiously similar orbits.

One unusual parameter of their orbits in particular is the argument of perihelion, the angle between the point where the object is closest to the Sun (its perihelion) and the point where its orbit crosses the ecliptic, the plane on which the Earth and the other major planets orbit. Given that almost any orbit is possible for small objects that far out, you’d expect this value to be different for every one. But both 2012 VP113 and Sedna have arguments of perihelion close to one unique number: zero. This means that when these KBOs are at their closest points to the Sun, they are passing through the plane of Earth’s orbit.

And it’s not just 2012 VP113 and Sedna.  A suspiciously high number of closer KBOs also have arguments of perihelion close to zero. The authors wanted to find out if these discoveries were simply the result of observational bias, or if the findings signal that something else is going on.

To do this they ran two simulations, creating an artificial population of KBOs with a wide range of possible orbits, along with the Earth. Assuming that KBOs are most likely to be discovered at their closest approach to the Earth, they plotted the positions in the sky where each KBO would be found (Figure 1). Running the simulation until the artificial Earth had made twenty million orbits, they looked for the locations where KBOs are most likely to be found.  Plot D of Figure 1 reveals the bias: the majority of objects will be found within 24 degrees North or South of the celestial equator.

Figure 1: The results of the simulated solar system, showing the distribution of the computer generated KBOs. The plots show the locations of the KBOs as a function of: A. closest approach to Earth; B. semi-major axis; C. eccentricity; D. inclination; E. longitude of ascending node; F. argument of perihelion. The green circles show the observed KBOs.


Having quantified the bias, they could then run a second test. This time, they tried to find the distribution of KBO orbits that, given the bias they now knew, we should expect to see. If this matched the observed clustering of the argument of perihelion, then they would have a simple explanation for this apparent oddity. Instead, they found that the clustering couldn’t be explained by an observational bias: the oddly similar arguments of perihelion were real. Something had to be shaping the orbits of the KBOs.

With this established, the authors looked for possible explanations for these unusual orbits. The most likely explanation is the Kozai mechanism, where the orbit of a smaller object is shaped by a larger one orbiting further from the Sun. One particular feature of the Kozai mechanism is that the argument of perihelion of the smaller object is held within a small range of values, exactly what is needed to explain the behaviour of the KBOs.

The astronomers who discovered 2012 VP113 had suggested just that, hypothesizing that the orbits of both their new KBO and Sedna were being shaped by an undetected super-Earth (a planet roughly twice the size of our own). The authors of this paper go one step further: the orbits of the KBOs can best be explained by two super-Earths, one at around 200 au and another at roughly 250 au. They would be locked in an orbital resonance, where the inner planet goes round the Sun three times for every two orbits of the outer world. The gravitational interactions between these planets and the KBOs would force them onto the unusually similar orbits that the finders of 2012 VP113 had observed.
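
As a quick sanity check on those numbers, Kepler's third law (P² ∝ a³) ties the period ratio to the semi-major axes. The 200 au and 250 au values below are the paper's approximate figures, so this is only a consistency sketch:

```python
# Check the claimed 3:2 resonance with Kepler's third law (P^2 ∝ a^3).
# If the inner super-Earth sits at ~200 au, an exact 3:2 period ratio
# would put the outer one near 200 * (3/2)^(2/3) ≈ 262 au, in the same
# ballpark as the ~250 au quoted in the paper.
a_inner = 200.0  # au, approximate value from the paper

period_ratio = 3.0 / 2.0
a_outer_exact = a_inner * period_ratio ** (2.0 / 3.0)
print(f"outer semi-major axis for an exact 3:2 resonance: {a_outer_exact:.0f} au")

# Conversely, 200 au and 250 au give a period ratio of
# (250/200)^(3/2) ≈ 1.40, close to (though not exactly) 3:2.
print(f"period ratio for 200 au and 250 au: {(250.0 / 200.0) ** 1.5:.2f}")
```

The rough agreement is all that matters here; the paper's quoted distances are themselves approximate.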

So are there undiscovered planets waiting for us to find them? This would be particularly exciting as there are no super-Earths that we know of in the Solar System, but they appear to be abundant around other stars. However, the authors note that their conclusions have one huge caveat: small number statistics.

The calculations suggesting the existence of unseen planets are based on observations of only 13 KBOs, few enough that the clustering could be down to simple chance. If future observations find many more KBOs that don’t fit the pattern, then the case for missing planets will disappear. If, on the other hand, they do match what has already been seen, then their behaviour could be used to narrow down the locations of the unseen super-Earths, giving future astronomers the information they need to find them. The authors finish by noting that, if these super-Earths are there, then the effects of their gravity could be detected by the New Horizons spacecraft, currently approaching Pluto for our first close look at a KBO.
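
A toy Monte Carlo makes the small-number worry concrete: draw 13 arguments of perihelion uniformly at random and ask how often they happen to look concentrated. The 0.6 threshold on the mean resultant length is an arbitrary choice for illustration, not anything from the paper:

```python
# Crude illustration of the small-number-statistics caveat: with only
# 13 objects, how often does apparent clustering of the argument of
# perihelion arise purely by chance?
import numpy as np

rng = np.random.default_rng(42)
n_objects, n_trials = 13, 100_000

# Uniformly random arguments of perihelion over 0-360 degrees (radians here).
omega = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_objects))

# Mean resultant length: 1 for perfectly aligned angles, ~1/sqrt(n) for uniform.
R = np.abs(np.exp(1j * omega).mean(axis=1))

chance = (R > 0.6).mean()
print(f"fraction of uniform trials with R > 0.6: {chance:.4f}")
```

With 13 objects the chance of strong accidental concentration comes out at the sub-percent level under this criterion, which is small but not negligible; more discoveries would settle it either way.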

1. au=Astronomical Unit, roughly the distance between the Earth and the Sun.

January 19, 2015

Sitrep by Charlie Stross

I have not been blogging much lately because I have been a bit busy. "The Annihilation Score" (Laundry Files book 6) has been copy edited and is on course for publication in the first week of July, and I'm now about a quarter of the way into writing "The Nightmare Stacks" (Laundry Files book 7). This is a priority right now, because on January 28th I'm off to New York and Boston for my annual winter trip (and expect to come back with a bunch of edits to process on the new Merchant Princes trilogy). As my literary agent and my US publishers are all based in New York, and there's an SF convention—Boskone—in Boston, it's really a work thing, but I'm going to find time to send up the bat-signal for a brewpub evening in both cities: watch the skies, or this blog entry, for details.

Read below the cut for my itinerary and Boskone program items.

(Oh yes, one other thing. This is the time of year for Hugo nominations. 2014 was a bit of an odd year for me, insofar as I published just one piece of Hugo eligible fiction. It's a novel, an earlier work in the same series won a Hugo last year, and that's all I'm going to say. I am going to try to get off my arse and write a bit more short fiction over the next year or two, though, so things will be more interesting next year.)

My schedule:

Pub evening, New York: venue and date TBA, but some time from February 1st to 6th inclusive.

Pub evening, Boston: venue and date TBA, but some time from February 8th to 12th inclusive, or February 17th.

Boskone 52 schedule:

You can find the entire convention program here. My items are as follows (everything takes place in the Westin Waterfront, the convention venue; Harbor II, Griffin, and so on are meeting rooms):


Friday 16:00 - 16:25, Griffin (Westin)

(NOTE: I am open to requests. Want some of "The Annihilation Score" (Laundry Files book 6), "The Nightmare Stacks" (Laundry Files book 7), "Dark State" (Merchant Princes book 7), or something else? Vote here!)

Panel: Angels, Demons, and Saints

Friday 19:00 - 19:50, Harbor II (Westin)

Panel: Finding Diverse Fiction

Saturday 12:00 - 12:50, Marina 2 (Westin)

(I'm moderating this panel rather than having opinions of my own)

Kaffeeklatsch: Charles Stross

Saturday 13:00 - 13:50, Galleria-Kaffeeklatsch 1 (Westin) (NOTE: based on previous years, if you want to attend you'd better sign up early to avoid disappointment: there are a limited number of places)

Panel: The Alien

Saturday 16:00 - 16:50, Burroughs (Westin)

Panel: Nifty Narrative Tricks

Sunday 11:00 - 11:50, Harbor I (Westin)


Sunday 13:00 - 13:50, Galleria-Autographing (Westin)

NESFA Book Club: Neptune's Brood

Sunday 14:00 - 14:50, Harbor II (Westin)

January 18, 2015

What is a spoiler? by Feeling Listless

Film Charlie "@ultraculture" Lynne's new documentary about 90s/00s teen films Beyond Clueless is in release. There's a trailer for it above. Lynne's given this interview to The Double Negative about it.

I'm really quite interested in seeing it, but reticent. Because of spoilers.

We've discussed spoilers before but my own approach to them has become increasingly hard line, which is difficult in our media saturated, blah, blah, publicity, blah world.

Essentially, since it's good to keep a barometer of these things, here's the point I've reached.

A spoiler is anything about a film. Pretty much. Yes, anything about a film.

Since it's entirely impossible not to know something about a film before watching it, I have to relent slightly.

My optimal state right now is that I'm happy to know the title, some of the stars, perhaps the director and whatever's on the poster. Oh and I may have heard Kermode's review months before but forgotten everything he said other than if he was positive. Oh and did it win an award?

If it's a mega franchise, the teaser trailer is just about fine because it's a disjointed group of images, a teaser, if you will.

Nothing beyond that if I can help it.

Here's why.

Over the years I've read dozens of articles and a few books about the experience of attending film festivals.  Each and every one of them talks about the excitement of sitting down in front of a film on what may be its first appearance in front of an audience.  Unless it's been previewed, or shown to friends of the artists if it's a small-budget piece, that audience are the first eyes to see those images, the first ears to hear those sounds.

More often than not the film-goer in question will know little about what they're seeing.  The title, some of the stars, perhaps the director and whatever's on the poster.  Unlike the audience it may eventually see, that film-goer hasn't experienced the media saturated, blah, blah, publicity, blah world to come.  There might be some buzz but nothing much else.

I want that.  I love that.

It's also a backlash to what happened during my film degree all those years ago when many a film was spoilt by some piece of film criticism because in order to talk about a film you have to talk about the whole thing.

Also against the number of occasions in which I've absorbed the media saturated, blah, blah, publicity, blah world of a film which in the end became somewhat beside the point and ultimately a bit disappointing and empty.

So I avoid the full trailers.  And interviews unless they're print and I can skim.  Whole sections of Empire and Sight & Sound magazine go unread for months, especially the reviews. Oh hold on, add a star rating to the above list.  I'll glance at those at least.

The fewer preconceived notions I have before I watch a film, the better.

It's brilliant.

Of course I have my own prejudices and it doesn't mean I'll watch anything.  You can tell a surprising amount from a poster, for example if it's an Adam Sandler film in which he's trying to be funny rather than droll or if it looks like a "harrowing portrayal" of something.

Last night I Netflixed Draft Day, Kevin Costner's return to the sports film.  I knew it was about American Football and he was in it because that was what was on the poster and I remembered seeing his photograph on the review in Empire.

Even though I couldn't really follow the plot because there was little interest in hand holding anyone who doesn't already understand the vagaries of choosing a Quarterback for the NFL, I thoroughly enjoyed myself because the whole thing was a complete surprise including other cast members.  I even applauded when one of them appeared.

Like I said, it's brilliant.

Which brings us back to Beyond Clueless.

The problem with Beyond Clueless is that it's a film about films and in order to talk about a film you have to talk about the whole thing which means there are bound to be films in there I haven't seen which will have the plot explained which is too much information.

My first reaction was to look at the IMDb to see if Lynne and his associates had uploaded a list of the films mentioned with a view to catching up with the films I haven't seen before watching the documentary.

They have and there's a lot of them.

Except in glancing at the list, I'm getting to see the films Lynne talks about in Beyond Clueless.  In other words, I'm spoiling Beyond Clueless's potential surprises by looking at a list of the films he mentions.

Which is a pickle.

Do I ...

(a)  Watch Beyond Clueless and hope to god that it doesn't spoil too many unseen films in the process


(b)  Watch all the films I haven't seen in this list and hope to god that it doesn't spoil Beyond Clueless in the process

No idea.  In the end I expect it'll be

(c)  Begin watching Beyond Clueless when it arrives on dvd or tv or wherever and shut it off at the first sign of trouble then go and watch some of the films on the list anyway.

Apart from American Pie Presents Band Camp which sounds rubbish.

How to refer to mapping? by Simon Wardley

I often get asked what IP exists around the mapping technique that I talk about? Well, it is IP protected under Creative Commons 3.0 Share Alike. This means you are free to use it, free to modify it and free to distribute it as long as you use the same license. I'm not in the business of trying to extract rent from people for drawing maps, I'm in the business of helping others run their organisations more effectively.

This does leave a question of attribution. There are numerous forms that I'm happy with. As long as it's similar to this in spirit then I don't grumble.

1) Wardley Mapping
If you call the technique "Wardley Mapping" then this is fine and there's no need to attribute further in presentations and talks. I happen to quite like this term these days to avoid confusion with other 'value chain / stream' mapping techniques that are not equivalent. 

If you want to add a more explicit attribution line (e.g. you're writing an article) then "courtesy of Simon Wardley, CC3.0 by SA" or "courtesy of Simon Wardley (LEF) CC3.0 by SA"  or even a simple "courtesy of Simon Wardley" is appreciated. 

2) Wardley - Duncan Mapping
If you call the technique "Wardley - Duncan Mapping" then this is fine as it pays homage to James A. Duncan who was instrumental in the early stages of mapping. If you want to add a more explicit attribution line then any of the above are fine.

3) Value Chain Mapping 
This is an alternative name of the technique and if you're going to call it this, then ADD an attribution line. Since the LEF does provide me time to promote and teach others how to map, I have a preference for the attribution "courtesy of Simon Wardley (LEF) CC3.0 by SA".

The only thing that will actually annoy me is seeing the technique used without any reference to my name. Since I've put the hard work in over the last decade, I'd be grateful if you could at least mention me when using it.

Reader, I Married Him. by Feeling Listless

TV Another Off The Telly uplift.

Reader, I Married Him

Tuesday, September 19, 2006 by Stuart Ian Burns

One of the criticisms of the documentaries that accompanied the BBC’s Big Read event was that, in places, the autobiography of the celebrity advocates overshadowed the books they were championing. Reader, I Married Him takes a similar approach to an entire literary genre, the romantic novel, but finds a much clearer middle ground as Daisy Goodwin attempts to convince the viewer the books she loves are worthy of attention without resorting to reconstructions of desolate nights in student bedsits or dressing up in bodices.

Goodwin is a beguiling presenter; although previously seen on screen most prominently fronting the Essential Poems series (linking films of celebrities acting out verses), she’s perhaps best known as a poet and in the television industry as a producer or editor of the British version of The Apprentice and Channel 4 property shows such as Grand Designs and Property Ladder. Like Sarah Beeny, she has the rare ability of keeping the audience’s attention without resorting to shouting – her low, slightly sensuous delivery perfectly gauged considering the subject matter.

Her interview style is infectious, reacting to answers in a pleasingly natural way, the best moments occurring when Goodwin genuinely appears to be learning something new about her subject at the same time as her audience (bringing to mind Michael Wood or Mark Moskowitz, director of the seminal film about discovering books, The Stone Reader). Her giggling during Jilly Cooper’s revelations regarding the difficulty in writing sex scenes as she knocks on in age being one treasure. She also seems to have a genuinely open mind – whilst visiting the Mills & Boon offices there is real surprise in discovering the steamy content of the books, who the readership is and how much they’re prepared to pay for their fix. Attending a writing class, she tries her hand at penning a “typical” passage from one of these novels and is self deprecating about the results, the flirty young woman meeting the shepherd.

As Joanna Trollope, Erica James and Celia Breyfield offered their opinions, some of which were oddly defensive (“I don’t write romantic novels,” said Breyfield in this documentary about romantic fiction), none really captured the essence of why the genre is so popular past the expected “it’s a bit of escapism”. The public squirming at a lurid sex scene during an author’s reading was hardly balanced out by a critic explaining that Catherine Cookson was pleased she’d treated one of her books as a serious piece of literature.

More refreshingly, the documentary attempted to treat all of this fiction on a level playing field, giving as much attention to those Mills & Boon as the classics. Two fans were seen enthusing over boxes full of books, salivating and giggling as they read the cover blurbs and the variety of different stories and product lines were revealed. This was perhaps the most interesting revelation to anyone who assumed these things were all the same.

If there was a problem, it was that even though the documentary had been billed as a passionate argument for romantic fiction, and although there was certainly much conviction, it lacked a through narrative and couldn’t quite decide the audience it was aiming for – someone in the apparent 40% of people who are already in the readership, or doubters who see a pink cover on the shelves and buy the latest football biography instead. By attempting to find a middle ground the programme lacked focus. Whilst the former will no doubt be excited to be able to put faces to the names of the authors whose novels are stacked high on supermarket shelves, the latter (of which I count myself) were left to wonder why such work had garnered a large readership in the first place.

The primary omission was in regard to the content of the books, the nuts and bolts of what to expect from the genre. Whilst the second and third programmes will concentrate on the romantic hero and heroine, this opening edition was long on experts and readers expressing the emotions and feelings they glean from the novels, but short on mentions of particular characters or situations. This might have been an editorial decision because of the sheer size of the genre, but some pointers on what to expect might have been useful. There were fragments; in one section there was some talk of how romance has crossed over into crime fiction or is smuggled into war novels.

But for something that was supposed to be challenging the received expectation of what the plotlines in these novels are about, the non-40% will still be left with the girl meets boy, complication, boy falls for girl model, when the few tantalising tidbits that did creep through suggested stories that are far more complicated than that.

Precious little could be found on the history of the genre. Although the popularity of Mills & Boon amongst war widows during the 1920s was expanded upon, the roots of the stories and key early texts could only be glimpsed on book jackets in passing. The title of the series wasn’t even explained – a web search reveals it’s from Jane Eyre, something fans might be aware of but confused this layman. Hopefully this will be extrapolated upon in future episodes.

There were also disappointing lurches into conventionality and cliché – the clip from Little Britain to help illustrate who Barbara Cartland was when a perfectly good and revealing interview between the author and Melvyn Bragg seemed to tell that story perfectly well. The first reading crept up from Bridget Jones’s Diary, which was inevitably followed up with a clip from the film; and oh look lots of extracts too from the Andrew Davies television adaptation of Pride and Prejudice (oddly crosscut with comments from Deborah Moggach, credited as screenwriter for the recent Working Title film version).

Another niggle was an apparent two tier approach to contributions. Whilst writers and journalists, actors and screenwriters or “the names” appeared in full screen, members of the public, the life models and students were given a tinier amount of screen space with a giant black border around them as though their opinion was less important.

One of the more bizarre passages involved Goodwin taking a science test to see if romantic fiction could actively calm her during two stressful days at work. On each day she took a saliva sample before and after an hour of either work or sitting back and reading a novel and these were later taken to a university laboratory to see if on the second day her stress hormones had decreased. Without warning, the viewers suddenly found themselves in an episode of Horizon and Goodwin’s voiceover descended into a stream of technobabble during one of the only moments when she actually seemed slightly unsure of what she was saying – and inevitably neither did we.

Unsurprisingly, the test proved that yes, indeed reading romantic fiction during that hour did show that her stress was reduced – although the likelihood of this was increased because she revealed that she’d actually fallen asleep! Problematically however, the science wasn’t questioned and although this was no doubt supposed to be a bit of fun, it had the effect of derailing the proceedings, and this viewer wondered if the same result might have happened if Goodwin had been reading any kind of fiction, simply because she wasn’t y’know, working and in fact having a break. The contribution from a psychoanalyst didn’t really seem to give too many answers either.

The programme was far more comfortable and perhaps most engaging in the section dealing with the marketing of the books. Time was spent at a jacket meeting at Harper Collins as the experts discussed, for the benefit of the cameras, the relative merits of cover ideas and how they change depending upon the author and the market. This was juxtaposed with a reader assessing covers in a branch of Waterstones, pulling books off the shelves and explaining how certain visuals such as the colour red and a beach will attract her rather than a colour photo of the author grinning out from the dust jacket.

We learned that Tesco say that on average a sleeve must attract a potential reader in three to five seconds, which demonstrates why author names and titles in big lettering and simple, symbolic pictures are currently in vogue. Headline Books are rebranding the (out of copyright) works of Jane Austen with such covers and accompanying blurbs that highlight the romance (two Asda checkout workers were shown reading these and trying to guess the author), something that Moggach described as vulgar but, as the Headline spokesman explained, most current editions look like academic textbooks (well, apart from the film tie-ins) and if they bring the classics to a new audience that can only be a good thing – something Goodwin appeared to agree with.

Despite the niggles, this was still a very appealing documentary even if by the closing moments the non-40% might not be entirely convinced to drop their “books about guns” and try something with a female point-of-view. One of the inherent problems with this type of programme is that the thesis is stretched out over a number of weeks and episodes and there needs to be enough to hook the viewer in until the end. But admittedly, on this occasion with Goodwin as a guide most of the work was done. Perhaps, however, it would have been better to concentrate more on enthusing about the key titles, particularly the modern classics of the genre, rather than shoehorning in so many gimmicks.

TEDxNewYork: BIG and BOT Policy Proposals by Albert Wenger

In my talk at Techconomy Detroit I proposed two policy measures for dealing with the transition to the information age: a basic income guarantee (universal basic income) and what I then called the right to an API Key.

At the wonderful TEDxNewYork I was given an opportunity to expand on both of these ideas and I am now calling the second one "the right to be represented by a bot." Here is the video:

On (not) buying a digital camera by Vinay Gupta

Canon EOS M or Canon SL1 (called 100D in the UK)

The Canon EOS M is a DSLR sensor and electronics package wedged into a camera the size of an S110 or a Sony RX100. The big lens still sticks out the front, though. But it's tiny. The SL1 is the same sensor in a small, light DSLR body – with better autofocus. The EOS M had a persistent autofocus problem so is now £200.

Rough comparison

Size comparison
(that’s the 5D, the SL1 and the EOS M)

Absurdly detailed comparison

Exactly the same sensor. Both have a 3.5mm microphone-in jack. Bodies and processor are different. The EOS M is *tiny* (a compact camera with a huge lens), a DSLR-in-a-can, and the SL1 is simply a very small DSLR. Same sensor, very similar electronics.

EOS M *does* work with Magic Lantern. Really well. Has shitty autofocus: 0.75 seconds to get a lock. Otherwise, absolutely great.

SL1 also appears to support Magic Lantern. Does not have shitty autofocus.

The EOS M plus an adapter ring takes Canon EF lenses.

SL1 takes the lenses straight.

Buying options

EOS M stock lens (3x zoom) at Argos £200

EOS M w. 22mm lens (F2) and adapter ring for Canon EF glass £350

SL1 with huge range of lens options

Note that you really need the IS STM lens for shooting anything which isn’t off a tripod – image stabilized with stepper motors so that changing focus is silent for shooting video.


In the end, I realized that I did not personally need to own the camera – my existing point and shoot is good enough for youtube, I don’t have the time/energy/software to do much video editing, and I’m a lousy still photographer.

I did need to borrow a real camera occasionally to record talks for work – and that’s what I’m going to do. Simple!


Delphic Dwarfs and the Nature of Dark Matter by Astrobites

Title: All about baryons: revisiting SIDM predictions at small halo masses
Authors: A. B. Fry, F. Governato, A. Pontzen, T. Quinn, M. Tremmel, L. Anderson, H. Menon, A. M. Brooks, J. Wadsley
First Author’s Institution: Astronomy Department, University of Washington, Seattle
Status: Submitted to MNRAS


The Milky Way does not keep a solitary sentinel over our corner of the universe. Our home galaxy is surrounded by a large family of small siblings, its dwarf galaxy satellites. Dwarf galaxies are a unique lot; they have substantially smaller masses than the Milky Way, ranging anywhere from hundreds to hundreds of thousands of times smaller, and can be incredibly faint, which makes them a challenge to find (for more on how this is done for an extragalactic dwarf, see Elisa's bite). The faintest dwarfs (the Ursa Major II Dwarf, for instance), inhabited by as few as hundreds or thousands of stars, are only a few thousand times more luminous than the Sun, making them fainter than some of the most luminous massive stars we've observed. What they lack in mass and luminosity, however, they make up for in number. Dwarfs are the most common type of galaxy that you can find; the Milky Way alone has over 20 dwarfs, and our sister galaxy, Andromeda, has over 30. Their relative faintness, given their mass, implies that a large fraction of their mass is dark matter—in fact, they are believed to be the galaxies with the highest dark to baryonic mass ratio in the universe (often over 90%!). This makes them the cleanest laboratories for testing our understanding of dark matter physics.

Figure 1. Core vs. cusp. Dark matter-only simulations predict that the central densities of dark matter halos are peaked, or "cuspy." Such a profile is shown here in blue. Observations of dwarf galaxies indicate that they live in halos with a flatter central density profile, or are "cored," shown here in red and black.

In fact, dwarf galaxies motivated one crucial outstanding problem with our standard picture of dark matter, or cold dark matter (CDM): the so-called "core-cusp" problem.  CDM is described as "cold" because CDM particles were non-relativistic when they formed in the early universe; CDM is also collisionless, interacting with fellow dark matter particles through only the long-range gravitational force. Dark matter-only simulations of the universe predict that the dark matter halos in which galaxies are found should exhibit a peak or "cusp" in their central densities. Observations of nearby dwarfs, however, have instead revealed flat central cores (see Figure 1). One promising method of producing cored dwarfs questions the assumption that CDM is collisionless: if dark matter particles could interact with each other not just through gravity but also through a short-range force, i.e. were "self-interacting," the resultant collisions would allow particles to exchange kinetic energy and angular momentum and thereby smooth out central peaks. Limits on the strength of these self-interactions can be derived by measuring the sizes of dwarf cores, as stronger self-interactions produce more frequent collisions and thus larger cores.


Figure 2.  Similarity of CDM (blue) and SIDM (red) central densities in DM-only simulations of low-mass dwarf galaxies.  Vmax is a proxy for halo mass.  Dwarfs with Vmax < ~30 km/s (corresponding to masses < 10¹⁰ M☉) have similar core densities, regardless of whether the dark matter is self-interacting.  More massive SIDM galaxies form a lower-density core, as can be seen by the offset of the red points from the blue points at Vmax > ~30 km/s.

However, it appears that there may be some caveats that could limit the usefulness of low-mass dwarfs as a test for self-interacting dark matter (SIDM). The authors of today's paper investigated two in particular. They first considered whether the central densities of low-mass dwarfs were so low that dark matter self-interactions would not occur frequently enough to produce cores, even when strong self-interactions were assumed. To test this, the authors turned to N-body simulations of dwarfs, which they ran at high mass resolution in order to accurately resolve the low-mass dwarfs. To maximize the effects of dark matter collisionality, they assumed relatively strong self-interactions, typically quantified by the self-interaction cross section σ_SI/m_χ, where m_χ is the (unknown) mass of the dark matter particle. Observations of merging clusters of galaxies limit the cross section to be about 0.1–1 cm² g⁻¹; the authors adopted a cross section just above the upper end of this range, 2 cm² g⁻¹. They found that SIDM galaxies with masses less than 10¹⁰ M☉ did not exhibit cores (see Figure 2) and that this was indeed due to the lower dark matter densities at the centers of these galaxies.
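Why do low central densities suppress core formation? A back-of-envelope estimate of the per-particle scattering rate, Γ ≈ ρ (σ/m) v, makes the point. The cross section below is the paper's 2 cm² g⁻¹, but the density and velocity values are illustrative assumptions of my own, not numbers from the paper.

```python
# Rough per-particle self-interaction rate Gamma ~ rho * (sigma/m) * v,
# accumulated over a Hubble time. Density and velocity are illustrative.

M_SUN_G = 1.989e33      # solar mass in grams
PC_CM = 3.086e18        # parsec in centimetres
HUBBLE_TIME_S = 4.4e17  # ~13.8 Gyr in seconds

SIGMA_PER_MASS = 2.0    # cm^2 g^-1, the cross section adopted in the paper
V_CM_S = 10e5           # ~10 km/s, a typical dwarf velocity dispersion

def scatterings_per_hubble_time(rho_msun_pc3):
    """Expected self-interactions per particle over a Hubble time."""
    rho_cgs = rho_msun_pc3 * M_SUN_G / PC_CM ** 3  # convert to g cm^-3
    rate = rho_cgs * SIGMA_PER_MASS * V_CM_S       # s^-1
    return rate * HUBBLE_TIME_S

# A low-density dwarf centre (~0.01 Msun/pc^3) sees of order one
# scattering per particle over a Hubble time -- too few to carve a core.
print(scatterings_per_hubble_time(0.01))
# A denser centre (~1 Msun/pc^3) sees tens of scatterings, enough for
# heat transport to flatten the cusp.
print(scatterings_per_hubble_time(1.0))
```

This is only an order-of-magnitude argument, but it captures the paper's first caveat: below a certain central density the collision rate is simply too small for self-interactions to matter, however large the assumed cross section within observational limits.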

The second caveat to using dwarfs as probes of SIDM lies in the fact that the matter in our universe, of course, is not all dark. In these dark matter-dominated dwarfs, baryons can contribute as little as a few to a few tens of percent of the total mass. However, the authors found that including the baryons when modeling dwarfs of order 10¹⁰ M☉ could produce observed dwarf core sizes and central densities as well as observed stellar and gas distributions. In addition, it appears that when baryons are included, SIDM and CDM halos are nearly indistinguishable.

Thus low-mass dwarfs may not be particularly sensitive probes of DM physics after all, and their cores may find a natural explanation in baryons. What does this mean for SIDM? Is it destined to be led to the back door, shoved out and discarded among the remnants of other once-promising but ultimately unsuccessful cosmological theories? It's unclear. There are many uncertainties inherent in this sort of study: was the correct self-interaction cross section used?  Is our understanding of how baryonic physics affects dark matter structure accurate?  Other simulators have also modeled SIDM dwarfs with baryonic physics and found differences in the core sizes of SIDM and CDM halos; this may be due to the smoother star formation histories (as opposed to the many quick short bursts the authors of this paper saw in their sims) that their prescription for the baryonic physics produced. The authors argue that the observed outflows and stellar populations of dwarfs agree with the more bursty star formation produced in their models.  There's a lot more work to be done before we can make any definitive conclusions about SIDM.

If, ultimately, baryonic physics can produce substantial cores in dwarfs and realistic SIDM and CDM halos with baryons look identical, then some other exciting new doors are opened. If strong self-interactions can reproduce the observed properties of dwarfs when baryons are included, could the self-interactions be even stronger than we dared imagine?  It's also possible that SIDM could return, reincarnated, but with a velocity-dependence that produces weak self-interactions in massive systems (as required by observations of galaxy clusters), but strong self-interactions in dwarfs. To explore these possibilities, we shall have to wait for new observations of ever fainter dwarfs and further numerical work on SIDM.

Sky survey grant helps lead to a space science career by The Planetary Society

Quan-Zhi Ye was an 18-year-old college student and the principal investigator of the Lulin Sky Survey when he won a 2007 Shoemaker NEO grant. He's now a Ph.D. candidate and provides an update on his work in meteor studies.

Beagle 2 found? by The Planetary Society

What happened to Beagle 2? It's been a mystery for 11 years. That mystery appears to have been solved.

Watch the Incredible 'Rapid Unscheduled Disassembly' of SpaceX's Falcon 9 Rocket by The Planetary Society

SpaceX CEO Elon Musk has released four images of the company's Falcon 9 rocket impacting its drone ship landing pad in the Atlantic Ocean.

January 17, 2015

David Cameron in 'cloud cuckoo land' over plans to deify security services by Simon Wardley

The prime minister’s pledge to give security services omnipotence is ‘crazy’, experts say. Religious and security specialists have told our gallant reporters that David Cameron is “living in cloud cuckoo land” when he suggests a new Tory government would ascend ordinary intelligence agents to a new pantheon of Gods. 

Independent expert Ive Noclue said: “It’s crazy. Cameron is living in cloud cuckoo land if he thinks that this is a sensible idea, and no it wouldn’t be possible to implement properly. You can't just go around turning ordinary intelligence officers into divine beings and creating a new omnipotent pantheon. This is obviously what the Tories are planning to do and it's not sensible."

Other security experts echo Noclue, describing the approach as “idiocy” and saying Cameron’s plans are “ill-thought out and scary”. The UK’s data watchdog has also spoken out against “knee-jerk reactions”. One expert privately voiced concerns that "Many of today's problems occur from an already diverse pantheon of Gods. Adding more could exacerbate the problem. This Tory policy is just downright wrong. What happens if these new Gods start competing religions? What happens if one of these new Gods decides they're going to change reality or has a bad day and starts lobbing lightning bolts around? The consequences of this should have been thought through".

Meanwhile a start-up has warned on the possible effect on Britain’s nascent technology sector of Cameron’s plans. BurnYourCash has said it is already making plans to leave the UK if the Conservative party is re-elected with this policy of creating new Gods in its programme. 

On Monday, Cameron made a speech in which he decried the ability of ordinary people to have conversations on which the security services were unable to eavesdrop.“In extremis, it has been possible to read someone’s letter, to listen to someone’s call, to mobile communications,” Cameron said. “The question remains: are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not.”

Noclue said "It is obvious from Cameron's statement that he plans to turn ordinary intelligence officers into divine beings giving them omnipotence and impacting the whole religious, social and economic order of the world."

Other experts have thrown cold water on the plans, “Yes you can pass laws in Westminster until you’re blue in the face but it's not practical” said Renta Os, another expert in the field. “Citizens, businesses, and nation states need to protect themselves but creating a whole new pantheon of Gods is just not the answer. Cameron’s plans appear dangerous, ill-thought out and scary."

This new debate has political implications with alternative parties claiming "We will not be making any new Gods as part of our policy programme. That's our pledge to the public. Your Gods are safe in our hands". Our reporters were unable to get a response from the Conservatives, providing further clear evidence of the nefarious plans.

Off the record, one political commentator did state "I don't think Cameron said he was going to create new Gods, did you check the text? I think you're extrapolating wildly. It's about as daft as saying he intends to ban encryption. He didn't say that either."

For more on this breaking story ...

Ten More Links and a Video. by Feeling Listless

6 things you don’t understand about feminism that you should probably learn before writing about feminism:
"Why oh why do so many American liberal media platforms insist on publishing articles about feminism by people who are totally clueless about feminism? There are so, so many feminists who know what feminism is, it seems unnecessary. There are also hundreds of books and resources that you could simply, you know, read if you were interested in writing about feminism with any accuracy whatsoever. But why bother learning about what you are writing about when you can just make it up as you go along! Especially when you know your fellow American liberals are eager to take in any and all ideas that support their lazy disinterest in challenging the status quo or memememeME! worldview."

Speaking While Female:
"YEARS ago, while producing the hit TV series “The Shield,” Glen Mazzara noticed that two young female writers were quiet during story meetings. He pulled them aside and encouraged them to speak up more. Watch what happens when we do, they replied. Almost every time they started to speak, they were interrupted or shot down before finishing their pitch. When one had a good idea, a male writer would jump in and run with it before she could complete her thought. Sadly, their experience is not unusual."

Help us create a directory of the UK’s recorded sound collections:
"Sound recordings help us to understand the world around us. They document the UK’s creative endeavours, preserve key moments in history, capture personal memories, and give a sense of local and regional identity. But the nation’s sound collections are under threat, both from physical degradation and the disappearance of the technologies that support them. To understand the risks facing the UK’s sound collections and to map the scale of the problem, the British Library is initiating a project to collect information about our recorded heritage, to create a directory of sound collections in the UK. By telling us what you have, you can help us plan for their preservation for future generations."

How Old VHS Tapes Helped Save Early Web Design:
"Conventional wisdom has it that anything published online can never be truly erased. People petition governments for the "right to be forgotten"—to have personal information and images permanently removed from the Internet. But look for a screenshot or image from a page of the very early web, and you'll find it almost impossible to locate. Prominent technologist Andy Baio, who runs the site, where he promotes tech ephemera and news, has discovered an unlikely portal to an era that has all but disappeared from today's Internet, and quite nearly from the human record: VHS tapes. With these tapes, now viewable on YouTube, comes a critical look into a period that set the stage for the massive design and technological changes society has undergone over the past 20 years."

The year of post-it-notes and mindfulness:
"For a nethead and digerati like myself, 2014 was a year of ironies."

The World's Oldest First-Grader Is Honored By A Google Doodle:
"The Google doodle for Kenya today shows a white-haired man at a table in a primary school, earnestly writing a classroom exercise. The kids behind him grin as if to say, "He is kind of old to be a first-grader.""

If Hermione Were The Main Character In “Harry Potter”:
"Hermione Granger and the Goddamn Patriarchy."

Video Games and the Curse of Retro:
"Video games are more prone than other media to obsolescence. With each new generation of hardware and software, scores of titles are made unplayable. Music has suffered similarly, of course: vinyl morphed into cassette into CD into digital audio. But music, like films and books, is easily transferred to new formats. Video games, which rely not only on audiovisual reproduction but also on a computer’s ability to understand and execute their coded rules and instructions, require more profound reconstruction. Without a strong commercial incentive to maintain their back catalogues, many publishers allow games to drift into extinction."

Google Translate Now Rewrites Foreign Signs Before Your Eyes:
"Google's Translate app is already pretty neat, allowing your to snap photographs and have them converted into one of 36 languages. Now, though, you can point your camera at a sign or menu and see the translation in real time."

Why Star Trek: Voyager Meant The World To Me:
"Twenty years ago today, Star Trek: Voyager premiered. Of the five live-action Star Trek series, Voyager is not the best. If you were ranking them, Voyager and Enterprise would probably duke it out for last place. But none of that matters, because Star Trek: Voyager meant everything to me as a child."

January 16, 2015

The use of SWOTs in strategy by Simon Wardley

Two simple graphics ...

SWOT is a useful analysis tool for determining whether a particular attack / defence point, or a combination of them, has any potential. In order to understand where you can attack or defend, you first need to map the environment, identify likely competitor moves and then determine your options. After this you can apply your SWOTs to evaluate the options.

However, once you have determined your most favourable option then you still need to determine whether it is feasible i.e. you need to examine resource requirements, how you're going to achieve this etc.

When developing a business strategy, the order I use is ...

1) Map
2) Determine options (wheres)
3) SWOT each option
4) Determine preferred option (the why as in why here over there)
5) Business Model Canvas on preferred option (feasibility)

I don't see a great deal of point in using tools like SWOT if you're not going to first map out the landscape.

Getting stuff done by Simon Wardley

You've been given some project / business line / whatever to examine and get going. How do you start?

The following steps are what I find useful. If you're working in a large organisation then based upon experience, I can confidently state that if you follow the steps you'll be able to reduce costs significantly in the organisation over time (i.e. as ridiculous as this sounds, 90% reductions are not unheard of). You'll more effectively capture markets against competitors, you'll reduce failure, improve communication and you'll increase your speed of delivery. I don't expect you to believe me and I'm not going to waste my time trying to convince you. Take it all with a pinch of salt but ... have a go.

This is more of a refinement for those who have already started mapping.

1) Needs!
Start with user needs. Identify what the real user needs are. DON'T start with your own needs (e.g. make profit, be successful) ... start with the end user.

2) Value Chain!
Now work out what components are needed to meet those needs and create a value chain.

3) Map!
Now map it by adding evolution. NB, I use the terms uncharted to industrialised to describe the different areas of the map (the old lingo was chaotic to linear). These different areas have polar opposite properties and everything evolves from one to the other.
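The needs → value chain → map progression above can be sketched as a toy data model. This is purely illustrative: the component names, the 0–1 evolution scale and the 0.75 "commodity" cut-off are my own assumptions for the sketch, not part of the mapping method itself.

```python
# A minimal, illustrative model of a map: components anchored to a user
# need, each placed on an evolution axis from 0.0 (uncharted / genesis)
# to 1.0 (industrialised / commodity).
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    evolution: float                                # 0.0 uncharted .. 1.0 industrialised
    depends_on: list = field(default_factory=list)  # value-chain links

@dataclass
class WardleyMap:
    user_need: str
    components: dict = field(default_factory=dict)

    def add(self, component):
        self.components[component.name] = component

m = WardleyMap(user_need="get a cup of tea")
m.add(Component("kettle", 0.9))
m.add(Component("power", 0.95))
m.add(Component("custom blend service", 0.3, depends_on=["kettle"]))

# Industrialised components are candidates for "use, don't build".
commodities = [c.name for c in m.components.values() if c.evolution > 0.75]
```

Even a representation this crude makes the later steps mechanical: anything near the industrialised end should be consumed, not built.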

4) Challenge!
Now compare your maps to other maps for other projects or lines of business in other departments. Look for the clusters i.e. in the following diagram, everyone treating the activity in the same way (the red dots) with you standing out on a limb (the green dot). I collate my maps to find common activities in multiple maps and produce the distribution chart shown below. I find this very useful in tackling bias in a group e.g. the insistence that their ERP is a rare and poorly understood activity requiring a custom built system.

Don't skip this step - there's nothing more annoying in a large organisation than to start implementing something like a rules engine and then discover halfway through the process that the organisation has six other rules engines hidden away in different groups. This will help you find who is already building what you need.

Also try and compare your map to how competitors treat actions. If you're really lucky your organisation has some form of intelligence unit that keeps tabs on this and they can do it for you. If your organisation doesn't have multiple maps, well ... that's a pity. You can still challenge by asking questions such as how are we treating this, trying to find areas of efficiency etc. Suggest to your executives it might be a good idea for the company to learn about situational awareness.
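The collation described in this Challenge step can be sketched in a few lines: given several groups' maps as activity → treatment dictionaries, count how each shared activity is treated across the organisation and flag the lone dissenter (the green dot standing against the red-dot cluster). All the group names, activities and treatment stages here are invented for the example.

```python
# Hypothetical sketch of collating maps to find cluster-vs-outlier
# treatments of the same activity.
from collections import Counter, defaultdict

maps = {
    "finance":  {"ERP": "product", "rules engine": "custom"},
    "sales":    {"ERP": "product", "CRM": "commodity"},
    "your map": {"ERP": "custom",  "rules engine": "custom"},
}

# treatments["ERP"] ends up as Counter({"product": 2, "custom": 1})
treatments = defaultdict(Counter)
for group, activities in maps.items():
    for activity, stage in activities.items():
        treatments[activity][stage] += 1

# Flag activities where one treatment dominates but isn't unanimous.
for activity, counts in treatments.items():
    stage, n = counts.most_common(1)[0]
    if n > 1 and n < sum(counts.values()):
        print(f"{activity}: cluster says '{stage}', check the outlier")
```

Here "ERP" would be flagged: two groups treat it as a product while one insists on custom-building it, which is exactly the bias the distribution chart is meant to expose.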

5) Adjust!
Now adjust your map accordingly. Many of the things you were planning to build are already built by others (the red dots), so use them. Some things you were treating inefficiently, so change how you treat them. If you're lucky then some of the components you're going to build can also be provided to others e.g. a more commodity form of an existing activity.

If unfortunately you don't have maps of other departments / groups or any intelligence function then you won't know what others are really doing other than by going around and asking people. This is incredibly tedious, so try and encourage them to map in future as it'll help find stuff.

6) Think!
Now add a bit of strategy. Since you can see the environment then you might as well take advantage of it e.g. commoditise some things, create ecosystems, exploit inertia or buyer / supplier relationships, anticipate others moves, try to differentiate, there might be unmet needs ... whatever. Lots and lots to choose from (the wheres).

Try and determine why this route over that. Ideally your organisation will have an overall map with the high level strategy defined. Use this. Look at competitors, what their likely moves are, how you can manipulate the market to your favour. Again, if you don't have an intelligence unit or maps then you're a bit stuck on this. Still, it's better to start somewhere.

Tools like SWOT are perfectly useful for examining an individual where; however, use the maps and all the different points of attack to determine your overall play.

Once you've worked it out, mark up the map accordingly. This intended strategy needs to be shared with all parts of the organisation as others will be able to take advantage of your moves i.e. your move to build a commodity service might enable another part of the organisation to now build something they need.

7) Methods!
Now you have an idea of where you can attack, why you're going to attack one route over another, what the user needs are, what components are involved, what you already have or already use in the organisation (avoiding duplication), likely competitor moves etc then we can get onto the how. Treat the system as small components, don't try and treat it as one thing by applying one-size-fits-all methods. Remember the characteristics of activity (and practice, data and knowledge) change as they evolve and so multiple methods are needed.

8) Simple!
Now, use those FIST principles (Fast, Inexpensive, Simple and Tiny) to break up the system into small teams. Any time a team looks like it's going to get bigger than twelve people then break it down again.

9) Evaluate!
At this point, you've probably done a lot of work ... maybe as much as a whole day (unless you've had to go around different groups finding out what they're doing because no-one has maps). Give yourself some applause.

You'll understand the landscape, you'll know your user need, you've got an idea of where you can attack, some ideas of potential competitor moves, you've determined why one route (or multiple routes) over another, you've even got an idea of how to build the whole thing without duplicating etc. You're in pretty good shape but you're not done.

Now, you need to determine whether this is financially viable and whether you can play this game. Using the map, estimate some of the costs, determine your business model and fill out a business model canvas or something like it. Most of the information you'll need for this (e.g. key activities, partners, resources, value proposition, potential revenue streams etc) you should have already discovered along the way, but give yourself another half day.

At this point, I normally have a map of the environment (possibly with sub maps), a strategy based on the multiple wheres (with often some supporting SWOTs of individual wheres - many of which get rejected), a business model canvas for my plan of attack and a pretty decent idea of how to play the game.

Now, if for whatever reason the current approach is not financially viable (e.g. some of the components are too expensive) then you can go back and look at the strategy i.e. maybe we should commoditise those components first or save the play for a later day. Do remember that eventually components will commoditise due to supply and demand competition, hence what is not viable today might become viable tomorrow. The map alone will be useful to other groups, so share it with the rest of the company.

Approval (i.e. any need to gamble investment) is hopefully a relatively quick process given it's easy to challenge a map and you have all the supporting information (e.g. BMC, SWOTs, comparison to competitors etc). Most of the time I've found the usual response to be "this is so obvious why is no-one else doing it?" which brings its own challenge. 

At worst, you should find that you're spending two days on this entire process but remember, those maps aren't wasted effort as others can use them. If you're spending more time than this on getting to an approval state then either :-

a) You lack experience in mapping out environments - that's ok, you'll get quicker. Don't try and create perfect maps, there's no such thing.

b) There's a lack of maps elsewhere i.e. you're spending too much time trying to find out what others are doing. This could add weeks to the process in a large organisation and it's highly inefficient. Try and encourage others to map, it'll help you find the duplicates. One of the worst examples I know is of one organisation with over 120 different implementations of the same thing in different groups / functions and departments. Though, for totality of duplication and uselessness, it is beaten by another that has now managed to reduce their application estate from over 8,000 applications to slightly under 700. Yes, 87% of the estate was pointless. Of course that was obviously great for vendors and sucked for the company. As a rule of thumb, in any large organisation I'd ask how many instances of this thing you know of (e.g. the one you're planning to build and maybe the one other you know about off the top of your head) and whatever you say, I'd multiply by 10 - there's always vastly more duplication than you realise.

c) You've no intelligence function and so you're trying to learn basic economic lessons, competitor moves etc. Always a nuisance, this can add further weeks to the process and without maps then you have no way of recording what works and what doesn't. If necessary, set yourself up as the "intelligence" unit and get everyone else to send their maps to you. 

d) Determining strategy is hard - this is normal. Most people have no real experience of strategic play, having been brought up in a business world which relies on copying others with little to no situational awareness. It takes quite a bit of practice to get good at this and certainly if you're playing strategic games at a high level (i.e. overall company strategy on a national scale) then this can take a few days to a week (though that's a bit of overkill). I'm afraid it can take you years to get good but don't panic, you're usually playing against others who have little to no situational awareness, hence even basic moves like "fool's mate" work. You've got to start practicing and so this is as good a time as any.

e) The approval process is slow. There's not much excuse for this, ask why?

The other line which comes up is "it's complex" or "the line of business is complex". That's almost always complete gibberish and is more a signal that people don't want to think about it. Mapping tends to expose people to hard questions and that's not always popular.

10) JFDI!
Now, once you get approval then you've got everything you need to form your teams and fill them with the right sort of aptitudes and attitudes. Get your teams to put up kanban boards and just get on with it. They should all now understand the landscape, what's available, the strategy, what methods are best suited etc etc.

11) Iterate!
Remember to keep an eye on the map, watch for competitor moves, make sure teams are kept small etc etc.

--- tools

The mapping technique is Creative Commons 3.0 Share Alike. If you need a tool to help you map, well ...

a) The LEF (Leading Edge Forum) has a free mapping tool available online. It's not open source yet but that'll be announced soon enough.

b) The Wardley Maps Group has an open source tool available. I've no involvement with this group but I'm absolutely delighted they are doing this and taking the spirit of Creative Commons to heart.

c) Lots of people just use pen and paper, Visio or equivalent.

--- books and variations

There are three books which refer to mapping. The excellent Digitizing Government by Alan Brown, Jerry Fishenden and Mark Thompson. Also Jez Humble includes it in Lean Enterprise, which I've not read yet but I'm looking forward to it.

The WardleyMap crew have also stitched together a community book based upon many of my old blog posts on the subject and there's a community effort to make it better. 

There's also this blog which has mapping scattered throughout it.

There's also variations of the map concepts e.g. 18F has an interesting take in their post on 'How to Use More Open Source in Your Next Federal IT Acquisition'

--- notes

This post is an update on the "basics repeated again". I've added the comparison to other maps more explicitly (this is an incredibly useful first step) and simplified the management of the different methods into one single step.

The accidental consequences of connecting 100 billion+ dumb things together by Simon Wardley

Back at Ignite SF in 2007, I gave a talk covering commoditisation, evolution, 3D printing and the web of things. I finished the talk with a concern that I had repeated before and since. That concern I summarised in this image - the matrix will be built from a lot of basically dumb things.

I'm not a fan of the pantomisation of science and hence I'm uncomfortable with the recent open letter on AI research with celebrity endorsements. This is not how we should be funding investigation into science and the intent of the letter can only be seen as attempting to lobby and influence funding directions. I view it as misguided.

I do share concerns about the future of AI but then I take a different view from most on where it will come from. Neurons, whilst complex components (receiving, transmitting and storing signals using a mix of electrical, sound and other transmitters, able to connect to other components, responding to the environment and changes in the environment), are essentially dumb. The "intelligence" is in the interconnections, the constantly adapting matrix of all the signals and signals processing.

My concern is not that we will create "AI" through some concerted effort (I view this as hubris) but instead that we will accidentally build "intelligence" through the simple act of connecting 100 billion+ complex but ultimately dumb components with the ability to receive, transmit, connect and store using a mix of mechanisms and sensors. 

Let me emphasise that point - we are already building the matrix, it's called the Internet of Things. We just don't know it yet. We're not deliberately doing so and when the "intelligence" is discovered then it certainly won't have been created through any planned effort of our own but will have emerged and evolved in the space between devices. It'll simply exist in the interconnectedness of billions upon billions of dumb things.

At that point we're going to have to work out how to communicate and reason with it whilst knowing our lives, supply chains and economic systems are all dependent upon the billions of dumb things that enable it. We simply won't be able to switch it off, separate it or apply "rules" to it etc.

When will it happen? We don't know, it'll need to emerge and evolve first. We don't even know the starting conditions for this nor whether some constraint would prevent it from happening. It could be hundreds of years away.

Does evolution imply competition? Yes. In all likelihood if it does occur then there will need to exist multiple competing forms of simple intelligence in this space to drive its development to the point that we notice its existence.

Can we influence its development if this happens? We're unlikely to notice it until one form becomes advanced enough that we can discover its influence. All we need to do is connect huge volumes of these dumb things together, write endless unrelated systems that interconnect them and wait.

Isn't this like monkeys writing Shakespeare? Nope, think more along the lines of swarm intelligence (e.g. from ant colonies to co-operative behaviour in populations of bacteria).

--- 17th Jan 2015

This all depends upon nodes, connectedness and the ability for multiple forms of intelligence to emerge and compete in an environment.

Remember :-
* life itself is fundamentally about maintaining and reducing entropy
* the cycle times between such populations (unlike in biological systems) could be exceedingly short (orders of milliseconds)
* competition will naturally drive to more stable components enabling higher order systems and therefore higher orders of intelligence to emerge
* life and the development of intelligence is itself simply a series of bugs, unforeseen circumstances and accidents interacting with competition.

Out of interest - Google's secretive Omega tech just like LIVING thing. Even a cockroach has a million nodes with a billion interconnections. Then of course, you have to multiply up to create competitive environments. Still, with 100 billion devices and say a trillion different interconnections then you might find space for a few roundworm-like intelligences to emerge. Don't expect to be having startling and witty conversations over dinner with it.

The real question for me is therefore will we create artificial intelligence before it emerges?

Oblivion (The Complete Eighth Doctor Comic Strips Volume Three). by Feeling Listless

Comics The DWM strip made the change to colour superbly well, still able to provide moody imagery when required but also allowing for bright technicolour leading to a change in the shading of the Doctor's jacket from green to blue (which seemed like a huge decision which just shows how conservative the Eighth Doctor era was) and real expression in Izzy/Destrii's gills. This graphic novel also includes the short piece Character Assassination, drawn for a special issue about the Master but since it's about the Delgado model I'm saving that for when I finally try and catch up on my Third Doctor backlog. Yes, I'm the kind of fan who has piles of stories for particular incarnations unread or unheard.


Of everything there is to focus on in the story (the Doctor’s new jacket, Izzy’s new body), I’m slightly obsessed with the TARDIS interior. Although the general look is similar to the TV Movie, the focus is on a library, which I think is how it's portrayed in the novels and is much the case in the TV series. There’s also that massively scenic element of opening up the ceiling to reveal the exterior view, something which must have been considered for television too but perhaps would never quite look right. Plus there’s the fairy tale element of having the door acting as a portal into another world, one which has that quality from both directions. It’s a moment which tends to be cut from the strips for reasons of brevity, and sometimes, as here, it’s sadly missed.

Beautiful Freak

Having Izzy change species is really bold storytelling and for the first time in the comics, the story arc is much more about character than story. The Doctor’s big speech about changing shape is just how you would expect him to try to deal with the situation but of course he’s dead wrong and Izzy throws it back at him just as she should. She’s not just isolated from her body now, she’s isolated from her entire life and doesn’t even know how her body reacts to anything, even how to breathe. The scenes where she attempts to regain some of her identity by wearing the clothes she originally wore when she joined the TARDIS are meant to seem hopeful, but really they’re chilling. At this point we’re meant to believe Destrii isn’t returning with her real body but even with hindsight, this is extremely effective.

The Way of the Flesh

Four years before The Unquiet Dead, here’s a surprise celebrity historical which also backgrounds mortality, albeit utilised in a way closer to Army of Ghosts in terms of how it affects the local population. In keeping with the television series, we have the idea of Frida Kahlo and Diego Rivera with just enough accuracy for the purposes of the story, the former, thanks to her own condition, a way of helping Izzy come to terms with her predicament ready for the next story in which it becomes vitally important. Reading this reminds me to watch the two biopics again at some point, Frida and The Cradle Will Rock, though it’s Salma Hayek and Alfred Molina’s voices from the former which spoke the dialogue in my head when reading this rather than Corina Katt Ayala and Ruben Blades from the latter.

Children of the Revolution

The strip that’s famous enough to inspire a merchandising gift set, though kids won’t really be able to do a re-enactment until the inevitable waterproof Kroll hand puppet emerges. Of all these strips, barring the Izzy the Fish business, this is one of the few that could be filmed as-is on television, give or take a few budgetary trims. The idea of good Daleks has of course been visited a few times since, notably Into The Dalek, but also in Malorie Blackman’s novella The Ripple Effect. Unlike pretty much every other occasion, the Doctor doesn’t immediately assume it’s a ruse and doesn’t attempt to reveal their “true” colours ala Victory of the Daleks. The notion of Daleks as religious beings would also later be developed in the Russell T Davies era, as well as the notion of the Doctor as “saviour”.

Me and My Shadow

Another “Dalek Cutaway”, this time to explain to potential new readers who Fey Truscott-Sade is, as well as explaining to old ones why she hasn’t killed Hitler yet or some such. We’re told and we’ve seen in various narratives that the Time Lords aren’t the only beings in the Whoniverse with access to the technology. Although stories have been told about them and others manipulating history (see the Databank), even in Let's Kill Hitler there’s not much prospect of it. In the final exchange between Fey and Threshold, the latter reminds the former that Adolf is part of a nexus in history which is why she can’t go vigilante on him. But it’s interesting that we’ve never had a story about the Doctor saving Hitler as a young man from some other freelancer in order to save future history.


For all my griping about Capaldi’s callousness last year, the Eighth Doctor, largely due to his multi-media development across many years, could be pretty acerbic himself especially when his friends were in danger. Even knowing whom it’s directed at and why, when he says, “Wake up, Destrii. I care about keeping Izzy’s body intact, the girl currently wearing it can rot” it feels especially dark, especially since John Ross’s artwork magnifies the lines on his face and then cuts to a close-up of her finally appreciating the magnitude of what she’s done. The story ends on a line which would become famous in the Eleventh Doctor era, the sentiment magpied for The Hungry Earth. The monsters are scared of him.


Bye then, Izzy. Sinclair’s departure is one of the most heartfelt in the show’s history, not least because of the reveal that she’s its first LGBT assistant. In the notes, writer Scott Gray explains that this had been decided right at the start as part of the development of the character but hadn’t really been worked into any of the stories because, as I suspect, how could you do that in the DWM strip without it seeming crass and exploitative? It’s only in her concluding chapter that it begins to make sense, and her final kiss with Fey is a poignant and logical conclusion to the narrative. Russell T Davies apparently contacted them with his admiration and quite right too. It’s an occasion when Who's comics demonstrate their ability to be as moving as any other artform.

Ten years after the Huygens landing: The story of its images by The Planetary Society

The landing of Huygens on Titan was a significant moment for planetary science and a great accomplishment for Europe. But the Huygens landing also stimulated the development of the international community of amateur image processors that does such great work with space images today. I was in the midst of it all at the European Space Operations Centre in Darmstadt.

Year of the 'Dwarves': Ceres and Pluto Get Their Due by The Planetary Society

This year we achieve the first exploration of these curious but fascinating objects. Paul Schenk explains what we may learn about them.

January 15, 2015

Oscar's Predilections. by Feeling Listless

Film Having been reading Doctor Who comics all morning and shopping at Asda, I entirely forgot about the Oscar nominations. Watching the announcement via YouTube led to an enjoyable moment when a microphone picked up two of the producers of the coverage discussing when to turn off the feed, "We don't want a situation like last year..." Whatever happened then, I've slept and eaten some cheese since. Either way the only thing anyone is talking about on Twitter is Dick Poop and The Lego Movie.

Best Picture

American Sniper
The Grand Budapest Hotel
The Imitation Game
The Theory of Everything

Of course, though given previous experience, and I do keep saying this, it'll probably go to something relatively conventional with a decent lead performance.

Best Director

Wes Anderson, The Grand Budapest Hotel
Alejandro G. Iñárritu, Birdman
Richard Linklater, Boyhood
Bennett Miller, Foxcatcher
Morten Tyldum, The Imitation Game

But if it wasn't there, Wes Anderson. That goes for Best Picture too. I wouldn't be entirely unhappy if Budapest won both either.

Best Actor

Steve Carell, Foxcatcher
Bradley Cooper, American Sniper
Benedict Cumberbatch, The Imitation Game
Michael Keaton, Birdman
Eddie Redmayne, The Theory of Everything

Because Cumberbatch.

Best Actress

Marion Cotillard, Two Days, One Night
Felicity Jones, The Theory of Everything
Julianne Moore, Still Alice
Rosamund Pike, Gone Girl
Reese Witherspoon, Wild

Because Jones. Felicity "Chalet Girl" Jones is now an Oscar nominee. Though Pike is a close second.

Best Supporting Actor

Robert Duvall, The Judge
Ethan Hawke, Boyhood
Edward Norton, Birdman
Mark Ruffalo, Foxcatcher
J.K. Simmons, Whiplash

Best Supporting Actress

Patricia Arquette, Boyhood
Laura Dern, Wild
Keira Knightley, The Imitation Game
Emma Stone, Birdman
Meryl Streep, Into the Woods

Although we're reminded by anyone who watches soap operas that actors there have had to sustain characters across many years, the technical feat of doing that convincingly and invisibly across over a decade, within a few days of filming here and there, can't be disregarded. There is an argument that they're playing versions of themselves, but Hawke's characterisation is very distinct when seen in comparison to Jesse.

Best Original Screenplay

Birdman, Alejandro G. Iñárritu, Nicolás Giacobone, Alexander Dinelaris Jr., Armando Bo
Boyhood, Richard Linklater
Foxcatcher, E. Max Frye, Dan Futterman
The Grand Budapest Hotel, Wes Anderson, Hugo Guinness
Nightcrawler, Dan Gilroy

Admittedly to an extent these awards should be handed to Boyhood for the sheer endurance of the effort, but in relation to the screenplay, creating a coherent story across that time, taking into account changes in casting and location, has to be acknowledged.

Best Adapted Screenplay

American Sniper, Jason Hall
The Imitation Game, Graham Moore
Inherent Vice, Paul Thomas Anderson
The Theory of Everything, Anthony McCarten
Whiplash, Damien Chazelle

Shrugs. Haven't seen any of these. Cumberbatch rule applied.

Best Foreign Language Film

Ida (Poland)
Leviathan (Russia)
Tangerines (Estonia)
Timbuktu (Mauritania)
Wild Tales (Argentina)

Was Two Days, One Night not submitted? I haven't seen that yet either but it seems like a real Oscar type film. Odd.

Best Documentary Feature

Finding Vivian Maier
Last Days in Vietnam
The Salt of the Earth

That's what I presume will win - it's the least controversial of the choices.

Best Animated Feature

Big Hero 6
The Boxtrolls
How to Train Your Dragon 2
Song of the Sea
The Tale of the Princess Kaguya

See here. Other than that I'm applying the MARVEL rule. Other than Boyhood, the most important film of last year was relegated to the craft categories.

Best Original Song

“Everything Is Awesome,” The LEGO Movie
“Glory,” Selma
“Grateful,” Beyond the Lights
“I’m Not Gonna Miss You,” Glen Campbell…I’ll Be Me
“Lost Stars,” Begin Again

Consolation prize. Sniff.

Best Original Score

The Grand Budapest Hotel
The Imitation Game
Mr. Turner
The Theory of Everything

Though having two Desplat scores nominated means it'll go to something else due to split voting.

Best Film Editing

American Sniper
The Grand Budapest Hotel
The Imitation Game

In your face, The Baftas. The multiple aspect ratios of Budapest should give us pause though.

Best Cinematography

The Grand Budapest Hotel
Mr. Turner


Best Costume Design

The Grand Budapest Hotel
Inherent Vice
Into the Woods
Mr. Turner

Because it's the one I've seen. Disney vs Disney here too.

Best Production Design

The Grand Budapest Hotel
The Imitation Game
Into the Woods
Mr. Turner

See above.

Best Makeup and Hairstyling

The Grand Budapest Hotel
Guardians of the Galaxy

Because it's the most important film of last year. Expect Budapest will get it though.

Best Visual Effects

Captain America: The Winter Soldier
Dawn of the Planet of the Apes
Guardians of the Galaxy
X-Men: Days of Future Past

Dawn will probably win it though. At a certain point the Oscars are going to have to start looking at mocapping in general and as an artform. In this category we're comparing the creation of a performance with crashing a spaceship into a city. They're not the same thing.

Best Sound Editing

American Sniper
The Hobbit: The Battle of the Five Armies


Best Sound Mixing

American Sniper

No idea. Should have seen it at the cinema. Didn't. I've left the Short Films off. I know my place.

Let's talk about The Lego Movie and the Oscars. by Feeling Listless


Don't read this if you haven't seen it yet.

Seriously, don't.

Here's what the Oscars says in relation to the eligibility of an animated film:

"An animated feature film is defined as a motion picture with a running time of more than 40 minutes, in which movement and characters’ performances are created using a frame-by-frame technique. Motion capture by itself is not an animation technique. In addition, a significant number of the major characters must be animated, and animation must figure in no less than 75 percent of the picture’s running time."

The running time of the film is 83 minutes.

My guess is that the live action elements towards the end rendered it ineligible for a nomination (we could time those minutes, I suppose, and see), or, failing that, Oscar voters assumed it was ineligible and simply didn't vote for it.

Or decided that it was simply an eighty-minute-long toy commercial and didn't vote for it on that basis.

Simple as that.

Still, it was exciting for the three minutes when we thought it might be up for Best Picture.  I'll post the nominations in a minute or two.

Molecular Gas in Post-Starburst Galaxies by Astrobites

Title:  Discovery of Large Molecular-Gas Reservoirs in Post-Starburst Galaxies
Authors: K. Decker French, Y. Yang, A. Zabludoff, D. Narayanan, Y. Shirley, F. Walter, J. Smith, C. A. Tremonti
First Author’s Institution: Steward Observatory, University of Arizona
Status: Accepted by the Astrophysical Journal

In the 1920s, Edwin Hubble classified galaxies into two types based on their shape: spirals and ellipticals. Spiral galaxies tend to be blue (though not always), and elliptical galaxies tend to be red. When Hubble first constructed his “tuning fork”, he assumed that galaxies started out elliptical and evolved into spirals, so he named these categories “early-type” and “late-type”. It is now suspected that galaxies generally evolve in the opposite direction, from star-forming disk to passive spheroid. This evolution requires the galaxy to stop forming stars out of molecular gas. This cessation of star formation may be caused by removing the gas from the galaxy entirely or by simply stopping it from collapsing to form stars. The authors of today’s paper set out to determine which of these scenarios is operating in post-starburst galaxies.

A post-starburst galaxy is one that has recently undergone an intense bout of star formation, but whose star formation rate is now comparable to that of an elliptical galaxy. These galaxies are often found in the so-called green valley as they transition from a blue star-forming galaxy to a red early-type. The authors measure the molecular gas content in a sample of nearby post-starburst galaxies to determine whether they have removed their gas or have simply suppressed star formation.

The authors select post-starburst galaxies from the Sloan Digital Sky Survey by choosing galaxies with both strong stellar Balmer absorption lines and little hydrogen emission from star-forming nebulae. Taken together, these are signs of a stellar population rich in A-stars but devoid of the youngest O and B stars, implying that an intense period of star formation ceased less than a billion years ago.
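To make that selection concrete, here is a minimal sketch of such a cut. The thresholds below (Balmer absorption equivalent width above 4 Å, Hα emission equivalent width below 3 Å) are illustrative values of the kind used in this literature, not necessarily the authors' exact published cuts:

```python
def is_post_starburst(h_delta_abs_ew, h_alpha_em_ew):
    """Toy post-starburst selection on SDSS-like spectral measurements.

    h_delta_abs_ew: H-delta absorption equivalent width in Angstroms
                    (large for A-star dominated populations).
    h_alpha_em_ew:  H-alpha emission equivalent width in Angstroms
                    (large when star formation is ongoing).
    Thresholds are illustrative, not the paper's published values.
    """
    return h_delta_abs_ew > 4.0 and h_alpha_em_ew < 3.0

print(is_post_starburst(6.2, 1.0))   # -> True: A-star rich, no ongoing SF
print(is_post_starburst(2.0, 30.0))  # -> False: ordinary star-forming galaxy
```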

Using the IRAM 30m and SMT 10m millimeter telescopes, the authors measured the emission from CO in the galaxies to determine the molecular gas mass. CO is the second most abundant molecule in galaxies (after H2, which is hard to observe directly), so to find the total molecular gas mass, the authors use a conversion factor known as the X-factor. Figure 1 shows the distribution of molecular gas masses for the sample of post-starburst galaxies, compared to reference samples of early-type and star-forming galaxies.
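As a rough sketch of how such a conversion works, assuming the standard CO(1-0) line-luminosity formula of Solomon & Vanden Bout (2005) and a Galactic conversion factor (the specific input numbers below are hypothetical, not from the paper):

```python
def co_line_luminosity(flux_jy_kms, nu_obs_ghz, d_l_mpc, z):
    """CO line luminosity L'_CO in K km/s pc^2, from the integrated line
    flux (Jy km/s), observed frequency (GHz), luminosity distance (Mpc)
    and redshift (Solomon & Vanden Bout 2005)."""
    return 3.25e7 * flux_jy_kms * nu_obs_ghz**-2 * d_l_mpc**2 * (1 + z)**-3

def molecular_gas_mass(l_co, alpha_co=4.3):
    """M(H2) in solar masses. alpha_CO ~ 4.3 Msun / (K km/s pc^2) is the
    Galactic value; as the authors note, the true factor in these
    galaxies may be much lower."""
    return alpha_co * l_co

# Hypothetical nearby galaxy observed in CO(1-0) at 115.27 GHz:
l_co = co_line_luminosity(flux_jy_kms=10.0, nu_obs_ghz=115.27,
                          d_l_mpc=100.0, z=0.023)
m_h2 = molecular_gas_mass(l_co)  # of order 1e9 Msun for these inputs
```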

Figure 1: Molecular gas content of post-starburst galaxies compared to reference samples of early-type and star-forming galaxies. The dashed histograms with arrows are upper limits. The mass of molecular gas (left) and the mass fraction of molecular gas (right) in the post-starburst galaxies is similar to the star-forming galaxies. The early-type galaxies have much less molecular gas.

The Kennicutt-Schmidt relation is a relationship between the surface densities of star formation and molecular gas. The relationship holds for spiral and starburst galaxies, and the authors measure the star formation surface density to see whether the same relation holds in post-starburst galaxies.

The star formation rate is measured using two different star formation indicators measured from the SDSS spectra. The first indicator is the Hα luminosity, which traces the most massive and shortest lived stars, providing a good tracer of the instantaneous star formation rate. Unfortunately, the post-starburst galaxies are largely contaminated by LINER emission, which produces extra Hα emission. Thus, the Hα star formation rates are upper limits on the true values. The second star formation indicator used is the 4000 Angstrom break, a sharp jump in the spectrum of a galaxy caused by calcium absorption lines. A stellar population dominated by young stars has a weak 4000 Angstrom break. Because the post-starburst galaxies were still starbursting less than a billion years ago, A-type stars formed during the burst are still around, causing the 4000 Angstrom break to be weaker than predicted by the low present star formation rate. Due to this timescale effect, the star formation rate derived from the 4000 Angstrom break is also an upper limit.
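The second indicator can be sketched in code, using the narrow-band definition of the 4000 Angstrom break from Balogh et al. (1999): the mean flux density in 4000-4100 Å divided by the mean in 3850-3950 Å. The toy spectrum below is invented purely to show the mechanics:

```python
def d4000_narrow(wavelengths, fluxes):
    """Narrow 4000 Angstrom break index (Balogh et al. 1999): ratio of
    the mean flux density redward of the break (4000-4100 A) to the mean
    blueward of it (3850-3950 A). Young, A-star rich populations give
    values near 1; old populations approach 2."""
    red = [f for w, f in zip(wavelengths, fluxes) if 4000 <= w <= 4100]
    blue = [f for w, f in zip(wavelengths, fluxes) if 3850 <= w <= 3950]
    return (sum(red) / len(red)) / (sum(blue) / len(blue))

# Toy step spectrum: flat below the break, 50% brighter above it.
waves = list(range(3800, 4200, 10))
flux = [1.0 if w < 4000 else 1.5 for w in waves]
print(d4000_narrow(waves, flux))  # -> 1.5
```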


Figure 2: Relation between star formation rate and molecular gas surface densities. The black triangles are upper limits for post-starburst galaxies. The red circles are spiral galaxies and red squares are starburst galaxies. The two panels use different methods to determine the star formation rate of the post-starbursts.

As shown in Figure 2, the post-starburst galaxies are offset towards lower star formation rates given their molecular gas masses, compared to the Kennicutt-Schmidt relation for star-forming galaxies. The post-starburst sample has a median star formation rate density approximately five times lower than the median of star-forming galaxies at the same molecular gas density. Keep in mind that the star formation rate densities plotted in Figure 2 are upper limits; the true values may be even lower.
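The comparison can be sketched numerically, assuming the Kennicutt (1998) fit for star-forming galaxies, Sigma_SFR = 2.5e-4 * Sigma_gas^1.4 (Sigma_gas in Msun/pc^2, Sigma_SFR in Msun/yr/kpc^2). The input surface density below is invented; the factor-of-five suppression is the paper's median:

```python
def ks_sfr_density(sigma_gas, a=2.5e-4, n=1.4):
    """Kennicutt (1998) star formation rate surface density
    (Msun/yr/kpc^2) expected for a gas surface density in Msun/pc^2."""
    return a * sigma_gas**n

sigma_gas = 10.0                      # hypothetical gas surface density
expected = ks_sfr_density(sigma_gas)  # what a normal spiral would form
observed = expected / 5.0             # median suppression reported for the
                                      # post-starburst sample (itself an
                                      # upper limit: could be lower still)
```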

If they still contain lots of molecular gas, why have these galaxies stopped forming stars? The authors conclude that the lack of star formation in post-starburst galaxies is not simply due to removal of molecular gas. They put forward three potential explanations for the offset of the post-starburst galaxies from the Kennicutt-Schmidt relation shown in Figure 2. First, the molecular gas may have a reduced ability to collapse into stars, possibly due to gas heating induced by the starburst or an active galactic nucleus. Second, the X-factor may be much lower than in normal star-forming galaxies, causing the CO measurement to overestimate the mass of molecular gas. Third, the stellar initial mass function in these galaxies may be bottom-heavy, biasing the star formation rate measurements low. Looking to the future, resolved maps of molecular gas and star formation in post-starburst galaxies will provide greater insight into the question of how, and why, galaxies stop forming stars.




Pretty pictures of the Cosmos: Overlooked by The Planetary Society

Astrophotographer Adam Block showcases some stunning images of lesser-known galaxies and nebulae.

January 14, 2015

My Favourite Film of 2013. by Feeling Listless

Film  Gravity was a surprise.

The process of Christmas shopping every year is different.

Some years, like this year, everything seems to fall into place, the perfect items tumble off the shelves and through the letter box (or rather are carefully handed over at the doorstep) and there is little stress and no worry.

2013 was not like this.  In 2013, I didn't have a clue.

Which is why I ended up on Renshaw Street looking for Christmas presents one afternoon late into 2013.

I didn't find anything and I especially didn't find anything for my parents.

Dejected and doubting myself, it didn't take too long to realise that I needed to take a step back.  To breathe.

So despite having decided I wouldn't end up seeing Gravity at the cinema, I went to see Gravity at the cinema.

I surprised myself with a film.

In Screen One at Picturehouse at FACT in Liverpool.

Thank goodness.

Sitting on the front row was a good choice, especially since FACT's screen runs from floor to ceiling, and as Sandra Bullock tumbled and rolled through her computer generated space, I felt my stomach repeating the gesture right the way through, my heart pumping, my mouth wide open, forgetting all my Christmas present buying troubles.

Not even the berk directly behind me with the massive bag of crisps, who only seemed to munch during the all-important moments of silence, really distracted me.

I turned around now and then to give him my trademark glare but he was clearly in the group of people who just didn't seem to get Gravity, that its slight characterisation and relatively simple story are in service to the visuals and the ride.  He sighed.  He fidgeted.  He continued munching.

But I was with Sandra in her space suit breathing and breathing and breathing some more even when she shouldn't, even when I couldn't.

Then as it reached its conclusion, my arms wrapped around myself, watching one of the most powerful, moving shots in cinema history, I grinned.  As director Alfonso Cuaron's credit appeared I raised my arms in the air.

Presents were bought.  Christmas was fine.  Though I can't credit either to Gravity exactly, I was at least reminded that if you just keep going you'll get there in the end.

Aptivate is hiring! Why don&#39;t you join us? by Aptivate

Are you motivated by using technology to improve people’s lives? Aptivate is hiring an Agile project manager and a Web developer

Developing for Development by Aptivate

What's it like being a developer at Aptivate? I've been a developer at Aptivate for 10 years, and volunteered part-time in my evenings and weekends for 3 years before...

Secular Stagnation: GDP is the Wrong KPI by Albert Wenger

The secular stagnation discussion has a bad title and sounds wonkish but is extremely important — it is about nothing short of understanding what the economy is and should be about. If you want a lot of detail you can read through Marc Andreessen's epic secular stagnation tweet storm and all the great source material that he links to.

At the heart of the secular stagnation discussion is the idea that somehow the economy is growing more slowly than it should. See for instance the following quote from a Larry Summers speech:

[…] through this recovery, we have made no progress in restoring GDP to its potential.

The rest of the speech goes on exploring the reasons for why this might be so with a particular focus on interest rates.

There is, however, another line of inquiry that is much more important and that is to question the very premise here by asking whether GDP makes sense as a measure for the economy going forward.

Let me give an analogy from the business world. Suppose you are a newspaper and the KPI you are using to manage your business is your circulation. That works really well for a great many years. Then all of a sudden you are in board meetings that show a lack of circulation growth and possibly even a decline. You keep arguing about the reasons why your management has “made no progress in restoring circulation to its potential.” The debate rages on, all the while you completely ignore that your newspaper also has a website on which traffic has been growing steadily.

That is what is happening with the economy at large. We are de-materializing it and moving from a world of easily measured and priced atoms to a world of bits which are creating massive consumer surplus. So for instance: we stop printing and selling the Encyclopedia Britannica because everyone is using Wikipedia instead — that shows up as a decrease in GDP even though many more people now have access and get the benefit of knowledge. If we take dozens of different devices that were all sold separately (film camera, video camera, maps, books, photo albums, pencils, tape recorders, etc) and combine them into a single device (the smartphone) that uses primarily software to accomplish these functions, that too can easily result in a decrease in GDP. In a talk I am about to give at DLD in Munich I will give even more examples of this, but you get the idea.

By the way, GDP was always a suspect KPI for the economy because of the challenge of negative externalities: if we make products that harm people (eg selling cigarettes that cause lung cancer) and then sell other products to correct the harm (eg lung cancer treatment drugs) both of these increase measured GDP. Again, there are tons of other examples like that.

To be clear, there are lots of intelligent discussions to be had about phenomena such as reductions in risk premia, the role of China and NAFTA, the influence of household deleveraging, etc. and their impact on the economy. But until we are willing to ditch GDP as the correct KPI for the economy we will be like the newspaper management team trying desperate measures to fix their circulation problem.

You can see me talk about all of this in a presentation recorded last May at DLD New York. In the coming days I will write more about this topic including the impact of technology driving down prices (Chris Dixon tweeted a great chart), and you should check out my posts on education and healthcare. Imagine for a moment a computer that can correctly diagnose patients within seconds and then think about the implications for GDP.

I understand that this is a partial equilibrium argument and will expand on how it can be true for the economy as a whole. As part of that I will also suggest that our obsession with full employment — which is also part of the secular stagnation discussion — is also problematic as a KPI. Stay tuned.

Addendum 1: Turns out that Erik Brynjolfsson has given a great talk about the importance of free goods at Techonomy and has a working paper aimed at quantifying these effects (a couple % of measured GDP).

Addendum 2: One other important criticism of GDP as a KPI is that it is silent on distribution. We have seen major changes in that (a) away from labor toward capital and (b) within labor from normal distributions towards power laws (eg CEO comp relative to employees).

Relevance of magnetic dissipation on star formation by Astrobites

Title:  Dissipation of magnetic fields in star-forming clouds with different metallicities
Authors: H. Suma (1), K. Doi (1), K. Omukai (2)
Authors’ Institutions: (1) Department of Physics, Konan University, Okamoto, Kobe, Japan; (2) Astronomical Institute, Tohoku University, Aramaki, Sendai, Japan
Status: Accepted by the Astrophysical Journal

What’s the paper about?
Where do stars come from? I guess most people ask this question at some point in their lives, and it is no surprise that a lot of research is going on in that field. Other astrobites have already discussed some basics and concepts of star formation, among them a process called magnetic braking, which plays a crucial role in transferring angular momentum during the collapse phase of the prestellar core. However, there are two main caveats. First, interstellar magnetic fields can only be observed indirectly, which makes it difficult to estimate their strength precisely. Second, simulations are idealized and often neglect mechanisms which can reduce the strength of the magnetic fields. That's exactly where this work comes into play. The authors investigate if and how the magnetic field strength reduces, or as physicists say, dissipates during the initial collapsing phase.

A brief overview of the underlying model

Fig. 1: Illustration of regimes of magnetic dissipation for an ionization rate such as present in the Milky Way (η=1). The color scheme reflects the logarithm of the ratio of drift velocity to free-fall velocity of the gas. The four panels, from left to right and top to bottom, mark different metallicities of Z/Zsun = 1 (panel 1), 10^-3 (panel 2), 10^-6 (panel 3) and 0 (panel 4). Above the white line ambipolar diffusion dominates over Ohmic dissipation.

Particularly, the authors investigate how the magnetic field strength evolves in a so-called one-zone model of spherical core collapse. Expressed in everyday language, this means that for each parameter one average value is assumed for the pre-stellar core, which is modeled by a sphere. After the sphere starts to collapse, the parameters at the center evolve and the authors study the parameters’ change. The authors are mainly interested in the dependence of the dissipation on the amount of elements other than hydrogen and helium in the environment (astronomers call this relative abundance ‘metallicity’). The metallicities vary from zero to the solar value. Moreover, the authors study the collapse for environments of different ionization rates. The ionization rates indicate how readily atoms and molecules become electrically charged. Remember that this is important because magnetic fields are induced by moving electrical charges. The authors assume four different rates corresponding to four different regimes in the universe (primordial pristine environment, first galaxies, Milky Way and distant starburst galaxies). While the primordial pristine environment is basically non-ionized and first galaxies only have 1% of the ionization rate of the Milky Way, distant starburst galaxies are supposed to be ionized ten times more strongly than our galaxy.
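For a sense of the clock governing such a collapse (a hedged sketch, not a calculation from the paper): the characteristic timescale of a pressure-free collapsing sphere is the free-fall time, t_ff = sqrt(3*pi / (32*G*rho)), which shrinks as the central density grows, so the core evolves ever faster.

```python
import math

G = 6.674e-8     # gravitational constant in cgs units
M_H = 1.673e-24  # mass of a hydrogen atom in grams

def free_fall_time(n_h):
    """Free-fall time in years for a pure-hydrogen gas with number
    density n_h in cm^-3: t_ff = sqrt(3*pi / (32*G*rho))."""
    rho = n_h * M_H                               # mass density, g/cm^3
    t_sec = math.sqrt(3 * math.pi / (32 * G * rho))
    return t_sec / 3.156e7                        # seconds -> years

# Roughly 1.6 Myr at n_H = 1e3 cm^-3, and a thousand times shorter at
# n_H = 1e9 cm^-3: the collapse accelerates as the density rises.
```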


Now that we have an idea of what the model is like, let's see what the authors get out of it! One often-proposed dissipation effect for the magnetic field is ambipolar diffusion: the ionized particles, which are coupled to the magnetic field, slow down the freely collapsing neutral particles via collisions. With respect to the gravitational collapse, densities in the prestellar core are usually too high and magnetic fields are too low to allow a significant contribution of ambipolar diffusion in the models. Only in the case of low ionization fractions and high metallicities can ambipolar diffusion play a significant role in dissipating the magnetic field during the collapse phase.

The other often-considered process is the transfer of magnetic (or electric) energy into heat, called Ohmic dissipation. Indeed, the authors claim that this is the main mechanism reducing the magnetic field strength in prestellar cores. Figure 1 illustrates the ratio of the drift velocity of the magnetic field lines to the gas velocity for four different metallicities and the ionization rate of our Galaxy (η=1). In the case of a faster drift than gas velocity the magnetic field dissipates. The white line marks the boundary between the two dissipation mechanisms: ambipolar diffusion dominates above the line and Ohmic dissipation below. As you can see in the four panels of Fig. 1, there is a range colored in red, for densities high enough to collapse, between nH = 10^12 and 10^17 cm^-3 for solar-like metallicities. Note that the range becomes narrower for decreasing metallicities because fewer charge carriers are available at lower metallicity and the gas becomes more resistive. Similarly, assuming a constant metallicity, the magnetic field dissipates over a wider range of density in the case of lower ionization rates.

Fig. 2: Density relation with respect to metallicity for four different ionization levels. Upper and lower points of the same color mark the range, where Ohmic dissipation applies. The shaded region illustrates the range, where protostellar outflows and jets are expected to be launched. As you can see, the launching region is independent of the ionization rate.

Relevance for the formation and properties of the First Core

So far so good, but what does that mean for star formation? Figure 2 illustrates the correlation of density and metallicity for different ionization levels (different colors). The upper and lower points mark the boundaries of the density range where Ohmic dissipation is active. The shaded region illustrates the density range in which protostellar outflows or jets may occur during the phase of the first hydrostatic core. Let's look at how stars like the Sun could have formed. The shaded region extends to densities lower than the lower bound of the dissipation range (green dots). Hence, the first core already forms and evolves at densities lower than required for dissipation. That means that at the time when the first core forms the magnetic field strength is not reduced. During the collapse the strong magnetic field lines can twist strongly. Subsequently, the twist induces protostellar outflows and jets before Ohmic dissipation has a chance to suppress the launch.


What is most important about the paper? The authors study the evolution and possible dissipation of the magnetic field during pre-stellar collapse for different conditions of the environment. Considering conditions of our galaxy, Ohmic dissipation is expected to occur during gravitational collapse. Does that mean simulations neglecting the dissipation effects are useless? Fortunately not, because at metallicities similar to the Sun's, the first core already forms before dissipation sets in and launching of outflows and jets is still possible. That means that magnetic effects are actually expected to influence the collapsing phase. With respect to the angular momentum problem, this means that magnetic braking can contribute significantly during the initial phase of star formation even when magnetic dissipation is taken into account.


JUICE at Europa by The Planetary Society

Europe's JUICE spacecraft will provide us with a detailed regional study of this icy moon of Jupiter.

January 13, 2015

Woody Allen's signed with Amazon. by Feeling Listless

Film You will have read the news but just to record it here for posterity, Woody Allen's signed with Amazon to write and direct a series for them.

"I don't know how I got into this," he says in the press release I received about it. "I have no ideas and I'm not sure where to begin."

Well, quite. It's not his first television work of course. That would be, most prominently and least promisingly, his version of Don't Drink The Water starring himself and Michael J Fox for HBO, which I reviewed here.

Does this mean that for the first time in decades there won't be a new film in 2016, breaking an unbroken run of annual releases since Annie Hall in 1977? Or will he manage to fit this in and a theatrical release?

What form will it take?  Will it be one story across multiple episodes or, my guess, some kind of anthology series based on his previous writing (so as to keep it distinctive from the Whit Stillman series)?

Presumably they're just throwing some money at him and seeing what he'll come up with.  Either way, it's deeply exciting.

Soup Safari #14: Boston Chicken Chowder at La Soupe du Jour. by Feeling Listless

Lunch. £3.50 meal deal (including bread with cheese topping and a banana). La Soupe du Jour, 1-3 Dolman's Lane. Warrington, UK. WA1 2ED. Tel: 01925 555 030. Website.

Cloud cuckoo politics by Charlie Stross

Our glorious prime minister, failed TV company marketing director David Cameron, has proposed banning all forms of encryption that can't be broken by the security services. I'm not the only person who thinks this policy is beyond bonkers and well into criminal insanity (even his own deputy prime minister has reservations), but for the record, let me lay out why this is such a bad idea.

0. It is already a criminal offense to refuse to disclose your encryption keys, or to decrypt an encrypted file, on receipt of a lawful order to do so by the police or a court, under powers granted by Part III of the Regulation of Investigatory Powers Act (2000), in force since 2007. (Immediate consequences: paranoid schizophrenic jailed for refusal to decrypt his files. Apparently French anti-terrorism police became suspicious when he ordered a toy rocket motor. Strong encryption is the new tinfoil hat for technically ept paranoids: there's a human rights issue here. But I digress.) The point is, legal powers to essentially compel compliance with Cameron's goal already exist.

1. What Cameron is asking for, however, is a lot more drastic: the outlawing of endpoint-secured communications protocols. In other words, the government must be able to decrypt any encryption session used within the UK. This has drastic consequences which would, in my view, drastically undermine British national security (and cripple our IT industry).

What are these consequences?

2. If the government can decrypt an end-to-end encrypted session, then a third party can in principle use the same mechanism to decrypt it. (The third party could be a rogue government employee, or a crypto hacker.) This is not a hypothetical: it's intrinsic to how cryptography works. It's either secure against all third-party snoopers, or it isn't secure and will be cracked in time inversely proportional to the value of the data conveyed. Also, merely knowing that an encryption protocol has a weakness makes it easier to attack.
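The underlying point can be shown with a toy one-time pad (a deliberately minimal sketch, not a real protocol): with the key, decryption is trivial; without it, the ciphertext is consistent with every possible plaintext of the same length. A mandated "government key" is just another copy of that secret, and whoever holds a copy, lawfully or not, can decrypt.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad encryption/decryption: XOR is its own inverse."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"privileged lawyer/client email"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # endpoint with the key: trivial
# An eavesdropper without the key learns nothing but the length: for ANY
# candidate plaintext there exists a key producing this same ciphertext.
```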

What sort of stuff would be at risk of third-party snooping by criminals or random hacker gangs like the denizens of 8chan or Anonymous?

3. Let's start with email. Not just your regular email: how about privileged lawyer/client communications? Internal transmission of confidential medical health records within the NHS backbone network? Your accounts, going to and from your accountant?

4. But email is only the tip of the iceberg. How about the encrypted web session you use to check your bank account? Or to pay your income tax? If you're a small business, the VATMOSS system is obviously a target—and a high value one, where an attacker could steal large amounts of money. Mandatory back doors in encryption imply weakening the security around the government's own tax-raising system. (Talk about sawing off the branch you're sitting on.)

Some systems require end-to-end encryption or they are simply too risky to permit. What are they?

5. Let's start with SCADA systems that control blast furnaces, nuclear reactors, water treatment plants, and factories. Then we can add other online systems: the in-cab signalling system used to deliver signals to drivers of trains on railway lines cleared for high-speed running, traffic signal boards on motorways, and in the not too distant future systems used by air traffic control for filing flight plans and transferring security-related passenger information.

We should then add online finance systems, from Paypal to the APACS credit card settlement system, the BACS payment system through which about 80% of the pay cheques in the UK are sent straight to the recipients' bank accounts, to inter-bank settlement and reconciliation, the share dealing system used by the London Stock Exchange, and every supermarket and wholesale warehouse inventory management and stock control/ordering system in the country.

What is the worst case outcome of mandating that the security around all these systems is weakened?

6. How about a group within 8chan deciding, purely for lulz, to scramble all the patient medical records accessible over the NHS Spine? Or that the Russian Mafia, who are already very much into cybercrime, hit the BACS system and use it to siphon off or scramble all payments going into the HMRC Income Tax accounts on January 31st?

Here's the key message that Cameron simply doesn't understand:

7. There is a trade-off between internal security and external security. You can have perfect security against message traffic between external hostiles if you ban encryption ... but by so doing, you destroy your internal security against attack from any direction at all. Or you can have total internal security with end-to-end encryption of all communications, and be pretty much immune to certain classes of hack attack, but lose the ability to listen for terrorist chatter. These two circumstances are opposite ends on a scale. You can adjust the balance between the two, but mandating either end of the scale is idiotic. Our prime minister has mistaken the rotating knob for a push-button with a binary on/off state. Hopefully his advisors will take him aside over the next few days and teach him better, or he'll lose the election this May. Either way, though, this proposal is disastrous and if it happens, well, I'll just have to get used to being a criminal.

Code Execution: Where Are the Clouds Headed? by Albert Wenger

It is hard for me to believe that I wrote my post “I Want a New Platform" over seven years ago. At the time that post led to our investment in MongoDB, which was then called 10gen. The company was started by Eliot Horowitz and Dwight Merriman inside of AlleyCorp, the New York based incubator started by Dwight and Kevin Ryan (Business Insider and Gilt Groupe also came out of AlleyCorp). Kevin had previously been an investor and board member in my own incubator and had become a friend. When he saw my post he reached out and we wound up investing. Today, MongoDB is the leading database to have emerged out of the NoSQL movement and I am thrilled that the company has just added additional financing.

At the time we first invested not only was the company called 10gen but it was also offering something a bit different: an integrated platform that provided not just a datastore but also a code execution environment. If you go back and read my original post that is what I had envisioned. In fact you could not access the data store by itself without the code execution environment. It turned out though that’s not what developers wanted. They wanted to be able to use their favorite libraries and to configure their own servers. Google App Engine which was first released around the same time as 10gen offered a similar solution and also struggled with developer adoption. In the meantime Amazon Web Services, which offered infrastructure on demand and had launched about a year earlier, took off.

As we start 2015 I am revisiting the question of code execution. Why? First, Google App Engine has made great strides. For instance, Snapchat was built and scaled on App Engine. Second, companies such as Docker, Mesosphere and Hashicorp are leveraging a slew of new open source projects and building their own software to dramatically restructure the code execution environment in data centers and on the public cloud. Their goal (maybe oversimplified) is to make something akin to Google’s own internal infrastructure — on which App Engine was built — available to anyone.

The big question for developers now (and consequently for investors) is what makes the most sense going forward. One thing I am pretty sure about: managing at the individual server level will finally start to go away. As always I thought this would happen faster, as you can tell from a talk that I gave at Web 2.0 Expo in 2008 (I have to find my slide deck from then and put it online). What will take its place though is less clear.

Do people really still want to run their own infrastructure even at a higher level of abstraction? If you are an existing enterprise or a startup that has really grown the answer is probably yes, but what if you are just getting going?

I am currently working on a blog post about where I think we are headed. I would love to hear from anyone who is either working on this as a software or service provider or has an opinion as a developer.

A Crippled Kepler Discovers a New Planet by Astrobites

Title: Characterizing K2 Planet Discoveries: A Super-Earth Transiting the Bright K-Dwarf HIP 116454
Lead Author: Andrew Vanderburg
First Author’s Institution: Harvard-Smithsonian Center for Astrophysics
Status: Accepted by the Astrophysical Journal


Figure 1. A description of the K2 mission.  The pressure of sunlight will help balance the spacecraft as it looks at different regions of the sky that lie on the ecliptic plane.  It will look at a particular region for about 83 days until the spacecraft must be rotated to a new region to prevent sunlight from entering the telescope.

The Kepler Mission was undoubtedly one of the most prolific scientific endeavors in modern astrophysics.  Despite its untimely end, this mission has discovered well over half of the known exoplanets to date, found the first Earth-like planets in the habitable zones of their stars, and boosted exoplanet studies to a pinnacle of interest for scientists and the public alike.  As has been discussed in previous Astrobite posts, a year and a half ago the second of Kepler’s four reaction wheels failed, rendering it incapable of continuing its primary mission, as three working reaction wheels were required to maintain the high precision and stability needed to detect these distant worlds.  However, an endeavor to tweak the mission so it could continue with two broken wheels culminated in the K2 project, which has also been summarized in a previous Astrobite post.  Now Kepler is en route to reanimation, saving it from spending the rest of its days as a half-a-billion-dollar piece of space junk.  In a nutshell, Kepler will use the pressure of sunlight as a helping hand – a third reaction wheel of sorts.  Its viewing will be confined to the ecliptic plane, and it will be looking at different patches of the sky every 80 days or so (figure 1).  This means it will no longer be able to uncover habitable planets around Sun-like stars (since their orbital periods are far longer than the time Kepler will spend looking for them), but could potentially find similar worlds around cooler M-type stars, where the habitable zone is closer in and planets occupying this zone take less time to orbit their star.  Today’s paper presents an exciting discovery and a taste of what’s to come when the K2 mission starts – the first exoplanet discovered by Kepler since the failure of its second reaction wheel in August 2013.

In February 2014, the Kepler spacecraft spent 9 days observing stars during the Two-Wheeled Concept Engineering Test.  Though this test was aimed at trying out the stunts of the K2 mission and not discovering new exoplanets, when the data from this short testing period was made public, scientists began to develop methods of analyzing the data from this altered mission.  One new data analysis issue that arises with the K2 mission is telescope drift.  As Kepler makes its observations of different patches of the sky, it will slowly drift as the Sun’s radiation pressure pushes against it, and every so often must be set back into position using the spacecraft’s thrusters.  This continual correction is visible in the data; as the telescope drifts, the concentration of starlight on the camera’s pixels slightly decreases.  After the data release, a photometric reduction technique was presented to account for this motion of the spacecraft.  This improved the precision of the K2 data by a factor of a few, making it comparable to the original Kepler mission.


Figure 2. The raw Kepler data of HIP 116454 (blue) and the corrected data (yellow). The raw data is offset to provide a better comparison. The repeating dips in brightness in the raw data are caused by the drifting of the telescope from solar radiation pressure.

With the data made public and the aid of a new data analysis technique, the authors of today’s paper visually inspected the corrected light curves and uncovered a single transit event for the star HIP 116454 (figure 2).  Now, what can a single transit tell us about a planet?  The size?  The orbital period?  The temperature?  Not without some more information – without a second transit we cannot determine the exact orbital period, and therefore cannot determine the orbital distance, and therefore cannot tell the size or equilibrium temperature of the planet.  However, the authors were able to set a lower limit on the orbital period of ~5 days, since they would have observed more transits if the orbital period was less than the observation time during the engineering test.  Also, the authors derived an upper limit on the period using the archive of Kepler data and knowledge of the transit duration.  In addition to looking at when planets pass in front of their stars, looking at how long it takes a planet to cross a star can unveil a great deal of information, and has been used to look for unobserved planets in Kepler systems and moons around Kepler planets.  By comparing the transit duration length to other planets in the Kepler catalogue, the authors saw that nearly all of the Kepler planets orbiting similar stars with similar transit durations have orbital periods of less than 20 days, and inferred that HIP 116454’s companion was likely analogous.


Figure 3. The K2 light curve overlaying the transit time window that was extrapolated from the radial velocity data (blue shaded region). The dotted line marks when the telescope had to make a major adjustment after 2.5 days of observing, and the study only considered data after this adjustment. The green shaded region marks half a period away from the transit (i.e. when the planet would be behind the star), and the lack of a secondary eclipse provides credence for the planetary interpretation of the transit and radial velocity measurements.

Generally, it takes 3 observed transits for a Kepler planet candidate to be “confirmed” (detected with a high enough degree of statistical confidence).  So with the observation of one transit alone, the authors were not able to confirm the existence of HIP 116454’s planet.  To gain confidence in their measurement and affirm that the transit was caused by a planet and not some other celestial object, the authors obtained radial velocity measurements of HIP 116454 using the HARPS-North spectrograph of the Telescopio Nazionale Galileo in the Canary Islands. The radial velocity measurements added credence to the existence of a planet in the right range of orbital periods by detecting a radial velocity curve with a 9.1-day period, and also uncovered tenuous evidence of another planet orbiting HIP 116454 with a 45-day period.  Also, by extrapolating backwards in time the authors were able to correlate the transit event observed by Kepler with the radial velocity measurement to confirm the existence of HIP 116454’s planetary companion, dubbed HIP 116454 b (figure 3).  This planet was found to be a hot super-earth, with a radius ~2.5 times that of the Earth.  With both radial velocity measurements (good for measuring planet mass) and transit measurements (good for measuring planet size), the authors were able to determine the density to help infer the composition of the planet. HIP 116454 b’s density is consistent with that of a low-density solid planet or a planet with a dense core and extended, gaseous envelope. Assuming the former of these cases, this study was able to infer the structure and composition of this super-earth if it is indeed entirely solid (figure 4).  
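
To make the density inference concrete, here is a minimal sketch of the arithmetic. The ~2.5 Earth-radius figure comes from the paragraph above; the 12 Earth-mass value is purely illustrative (the paper's measured mass is not quoted here):

```python
import math

M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

def bulk_density(mass_earths, radius_earths):
    """Bulk density in g/cm^3 for a planet given in Earth masses and radii."""
    mass_kg = mass_earths * M_EARTH
    radius_m = radius_earths * R_EARTH
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume_m3 / 1000.0  # kg/m^3 -> g/cm^3

# Radius ~2.5 R_Earth from the transit depth; the mass is an
# illustrative stand-in, not the paper's measurement.
print(round(bulk_density(12.0, 2.5), 2))
```

As a sanity check, the same function gives roughly Earth's bulk density (~5.5 g/cm³) for inputs of (1, 1); values around 4–5 g/cm³ sit between rocky and volatile-rich compositions, which is why the degeneracy in figure 4 arises.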


Figure 4. Left: A Mass/Radius diagram for sub-Neptune exoplanets. The colored lines infer the composition of these planets if they are indeed entirely solid and do not have a large, gaseous envelope. HIP 116454 b is marked in red. Right: a ternary diagram showing the allowed compositions for solid exoplanets. The thick black line is the allowed composition with the best-fitting mass and radius measurements, and the dashed black line indicates the composition allowed with one-sigma uncertainty.  The arrows point towards higher concentrations of water (blue), iron (red), and enstatite (green), and the colored lines intersect at allowed combinations of these compounds.

Though this planet is just one of many hot super-earths that Kepler has discovered, its discovery has huge implications in the immediate future of exoplanet studies.  K2 will thrive off of planets orbiting nearby bright stars, as they provide an excellent opportunity for follow-up radial velocity observations by ground-based telescopes.  Scientific ingenuity has allowed a crippled satellite to continue its exploration of the universe, and we can rest assured that many more discoveries will follow when the real K2 mission begins this year.  Though Kepler has been down, it is far from out.

Reconstructing What Happened at Sea, as Dragon Arrives at Station by The Planetary Society

Following a routine two-day voyage, SpaceX's Dragon capsule pulled in to port at the International Space Station. Meanwhile, tweets from CEO Elon Musk give clues on what happened at sea.

January 12, 2015

The Company of Friends: Izzy's Story by Feeling Listless

Audio The oddness of suddenly hearing these characters which have only previously appeared on page is entirely mitigated by just how authentically enthusiastic Jemima Rooper sounds as Izzy and how perfectly writer Alan Barnes gauges the script in capturing her pop culture references. Having this audio version comment on its comic origins is sublime as are the satirical swipes against 2000AD and its ilk, in a story which you can still absolutely imagine having appeared early in the comic’s DWM run (I think I may have listened to it slightly “out of sequence” but with no standard placement and multiple choices online I just ended up picking somewhere at random). Funny also how its conclusion manages to sound like parts of Gallifrey Base commenting on a female Master just a few years later.

The Glorious Dead (The Complete Eighth Doctor Comic Strips Volume Two). by Feeling Listless

Comics If all of the Eighth Doctor's multi-medium appearances mimicked the production method of the television series in some respect, with book editors and audio producers working in a similar way to showrunners, the comics also had just one or two writers across their entire run, Alan Barnes and Scott Gray, developing a unifying style and a very specific idea of how this version of the Doctor and his adventures should be written whilst simultaneously being just as flexible and experimental in the media within which they’re working.  Having jumped in originally halfway through the material in this collection, it's quite the experience to finally read the stories leading up to The Glorious Dead and beyond and understand what all of those continuity references actually meant.

Happy Deathday

There’s an argument that Doctor Who’s rarely better than when it’s taking the piss out of itself but as writer Scott Gray suggests in his notes this was the main strip’s first attempt at out and out comedy. Predictably, it’s hilarious. As he says, after the TV movie had “flatlined” they were looking for any opportunity to celebrate an anniversary and this was the 35th and so we have eight Doctors and hundreds of monsters appearing across eight pages spoofing both The Five Doctors and Dimensions in Time (and arguably and spectacularly Terrance’s recently published novel The Eight Doctors). There is a conversation between 4th and 7th about Time Lord allergies which is probably one of the best pieces of dialogue in the franchise’s history and an example of just the sort of fun you can have with this character.

The Fallen

Grace! Having waited a couple of years to bed in, the strip finally does the direct sequel to the TV movie, but harshes the mellow of the reunion by all but removing the romance from the relationship, and in a way which doesn’t quite feel in character for Dr Holloway – though it’s fair to say that it’s probably impossible to say properly what is in character considering her limited screen time. Nevertheless it’s easy to admire how Scott Gray is able to turn the odder elements of the TV movie, 8th’s prophecies, the half-human thing, to the advantage of the story, and how conservative it is that although the strip has access to the character (something neither the novels nor audios have), they don’t use her as the companion. The strip wants to be its own thing so Izzy remains.

Unnatural Born Killers

Kroton! This short piece by Adrian Salmon acts as a kind of “Dalek Cutaway” for future attractions. The graphic novel helpfully reprints the Cyberman-with-a-soul’s first appearances in the then Doctor Who Weekly, in which he first discovers he’s different and begins to make human friends, and the Salmon strip continues his story somewhat to show that he’s begun to take the fight to races, in this case the Sontarans, who’re threatening what he knows to be his former people. The art is dynamic with great use of blacks to produce something distinctive from the rest of the strips. Of course the notion of expected antagonists, what are usually called monsters, becoming the Doctor’s allies and friends would return in the new series with the Paternoster Gang, if lacking the melancholia of this man caught between races and worlds.

The Road To Hell

A full on Japangasm with the sort of dense storytelling and heavy exposition you can usually expect from manga. With massive visuals and splash pages and acres of philosophy and alien mythology it’s sometimes difficult to quite follow. It’s noticeable the extent to which Izzy is being utilised to drive the narrative as much as the Doctor and across all of these strips is rarely damselled. Katsuri will clearly return (his condition also looks towards the future treatment of Captain Jack). Even after having been saved he’s clearly unhappy that it means he’ll have to walk in eternity, his honour devalued. As we’ve already seen, the sense of who’s an antagonist/protagonist in these comics is especially fluid. The Doctor had better watch out.

TV Action

Good old TV Action. Of all the Eighth Doctor strips, this is one of those which is often talked about in hushed tones, I believe, the nod of recognition that people were there. I was. I began reading DWM some time during The Road To Hell, perhaps episode three, so felt like a bit of a charlatan when suddenly handed a celebration for all the issues I hadn’t read – until I realised I’d been bought that very first issue. This now doubly nostalgic trip to BBC Television Centre in 1979 is fairly notable for the extent to which it dodges the yewtree in a way that even television documentaries from the late 90s singularly failed to, rendering themselves impossible to rebroadcast. Oh, and the Tom Baker bit, which is one of the funniest things I’ve read so far on these travels, the quotes all from actual interviews, the total legend.

The Company of Thieves

Kroton! Again! Designed to insert the Cyberman into the TARDIS crew, his first meeting with the Doctor’s noteworthy because it’s a rare cliffhanger in which our sympathy is with an attackee. The Time Lord “kills” Kroton first, and so successfully has Scott Gray’s writing made us care for him that seeing him felled is genuinely worrying, notably also because we know how the Doctor will feel when he realises his mistake. As 8th tussles with pirates over the most powerful weapon in the galaxy, it’s also the first time I’ve really heard Paul McGann convincingly saying his lines, and there’s even a moment which I’m sure I’ve heard repeated between him and Charley, as he worries about the TARDIS’s navigation, whispering so as not to hurt the time capsule’s feelings.  He’s always doing that, isn't he?

The Glorious Dead

Yes, a single paragraph is entirely inadequate. Once again it’s impossible not to see resonance with later escapades, notably the conclusion to Capaldi’s first series with its weaponising of death and an emotional Cyberperson sacrificing himself to thwart the Master’s plans (see also the clever misdirection over the antagonist’s identity). The concept of the Omniversal spectrum just confirms my expectation that all of fiction and reality is Doctor Who (also neatly explaining the TV Action business), and the resulting Peanuts parody is inspired. It’s the first occasion when I’ve cheered on turning the final page, and the whole thing took an hour to read, much longer than usual. Only really marred by the slightly odd moment early on when 8th describes Izzy as a “blushing beauty” during some introductions. Oh dear, Doctor.

The Autonomy Bug

For the final strip in black and white, Roger Langridge returns with some absolutely gorgeous cartoon visuals which smuggle in a very dark tale about what constitutes identity and whether any one person has the right to dictate how another group should live. The whole thing looks like, and is structured in the style of, a Doctor Who Adventures strip but the storytelling is much more mature (though it’s true some DWA can be deceptively so too). Scott Gray suggests Cuckoo’s Nest as an influence in his notes but there are moments that recall Who Framed Roger Rabbit in the way that characters who could be simplistic gain depth through action and exposition (see also the novels The Crooked World by Steve Lyons and Paul Magrs’s Sick Building). A really impressive end to the collection.

Decoding the airspeed probe with a wheatstone bridge by Goatchurch

By not including a datasheet with their airspeed probe, Brauninger/Flytec gave me the pleasure of two successful days of hacking, involving an oscilloscope and much experimentation, to work out its parameters and build a circuit to exploit them.

I bought this thing as an optional add-on to the Flytec 6030 (which I’ve never got to grips with) back when I had more money than sense. I wouldn’t have got it for the purpose of reverse engineering like this because I couldn’t do electronics then, and anyway I’d have rated the chances of success as quite low.

Nevertheless, by applying various voltages in different directions and blowing on the propeller to get a response, I established that if you apply a positive voltage of about 1 V to the tip (and ground the other connection), the device exhibits a resistance of between 11,200 and 12,000 Ohms, depending on the position of the blade.

This was a job for a Wheatstone bridge:
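
For anyone reproducing this, the bridge arithmetic can be sketched as follows. The 11.6 kΩ value used for the three fixed arms is my assumption (chosen mid-range of the probe's 11.2–12 kΩ swing), not a value from the post:

```python
def bridge_output_mv(r_sensor, v_excite=1.0, r_fixed=11600.0):
    """Differential output (in mV) of a Wheatstone bridge with the probe
    as one arm and three fixed resistors of r_fixed ohms (assumed value)."""
    v_sense = v_excite * r_sensor / (r_sensor + r_fixed)  # probe half-bridge
    v_ref = v_excite * 0.5                                # fixed half-bridge divides evenly
    return (v_sense - v_ref) * 1000.0

# Sweep the resistance range quoted above.
for r_ohms in (11200, 11600, 12000):
    print(r_ohms, round(bridge_output_mv(r_ohms), 2))
```

With these assumed arm values, the 11.2–12 kΩ swing produces a differential output of roughly ±8.5 mV at 1 V excitation, consistent with the millivolt-scale trace below.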

You can actually see the voltage differences (in millivolts) over 1/12 of a turn of the propeller:

At about 1 V across the terminals you get this half-sine signal as each blade passes the stem:

If you put too many volts across it, there’s a problematic wobble on the floor voltage:

The reason the wobble is a problem is that I then pass the signal through an INA125 instrumentation amplifier using the wiring diagram copied from here (because so little of the datasheet made sense), and these irregularities would get stretched up into an ugly, indeterminate signal.

But if you get the volts right, then it produces these beautiful LED illuminating square waves shown in the video below (listen for my blowing on the prop above the noise of the office):

And finally there’s the finished circuit that can act as input to the IRQ pin of an Arduino so it can easily count the number of pulses per second:

Now all that remains is to find a way to calibrate it.
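
A first pass at calibration might assume airspeed is linear in the blade-passing rate, which can be estimated from the gaps between interrupts. The constants k and v0 below are hypothetical placeholders to be fitted (for example against GPS ground speed on a calm day), not measured values:

```python
def pulse_rate_hz(gap_microseconds):
    """Blade-passing frequency implied by the micros() gap between interrupts."""
    return 1e6 / gap_microseconds

def airspeed_ms(rate_hz, k=0.05, v0=0.3):
    """Linear calibration v = k * rate + v0.  k (metres of air per blade
    passage) and v0 (spin-up offset) are hypothetical constants to be
    fitted against a reference measurement."""
    return k * rate_hz + v0

print(round(pulse_rate_hz(2255)))  # a typical gap from the wind-trace logs
```

Once real reference data exists, k and v0 fall out of a simple least-squares fit of reference speed against pulse rate.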

This is just another tiny step on the way to building my portable high frequency logging device for the discovery of things.

If you’re doing this for yourself, it might be worth looking at the weatherflow device which works with a smartphone.

More notably, in the I-simply-can’t-keep-up department, filed under discoveries-that-I-made-30-seconds-ago, there’s also the AS Sensor for the XC Soar Android app, which first requires an adapter plug to route the signal into the microphone ring of the phone, where it presumably analyses the pulses using the audio circuitry!

So much for doing something new.

As you can see, I’m going down this micro-controller route rather than attempting to master the awesome-does-everything Android operating system and all its amazing sensors. I wonder if this is an error. Who knows?

In such an interconnected world, I think it’s necessary to invert the question, and assume that you can never trust an idea until someone else has had it first.

In which case for the idea I’m labouring under, I have to thank Mr Untuckable who posted on 21 April 2008 in a discussion about thermal rotation:

A track logging feature ? Show if the airspeed is greater than the circling track speed…

And that could be made possible by the new Adafruit Ultimate GPS (on order) which can read absolute position at the rate of 10 times a second and send it down a serial line.

January 11, 2015

The teenage version of me just screamed. by Feeling Listless

Music Old friend Matthew Rudd is now a DJ on Absolute 80s and presents Forgotten 80s, and whenever he plays a Debbie Gibson tune he tends to let me know, as he did this afternoon:

To which I replied:

To which Debbie Gibson herself replied:

Oh who am I kidding. I screamed, the version of me now. I expect I know what I'll be listening to for the next week.


Another Ten Links and a Video. by Feeling Listless

How Wes Anderson’s Cinematographer Shot These 9 Great Scenes:
"There are few directors with a visual style as distinctive as Wes Anderson's, and to find out just what goes into his carefully composed shots, you'll want to talk to Robert Yeoman. The 63-year-old cinematographer has shot every one of Anderson's films (save for the stop-motion Fantastic Mr. Fox); though, astoundingly, he's never been nominated for an Academy Award. Still, with The Grand Budapest Hotel in the hunt for multiple Oscar nods next week, what better time to talk to Yeoman about his storied career, using nine of Anderson's most famous scenes and shots as prompts?"

Why Pygmies Aren't Scared By The 'Psycho' Theme:
"In many ways, music and emotion almost seem interchangeable. Try listening to the Star Wars' Cantina Band song without smiling, or to the Psycho soundtrack without feeling a little tense. But what if you had never heard Western music before? Would these songs still make you feel the same way?"

Can’t sleep? How to beat insomnia:
"I don’t remember having trouble sleeping – until my late teens. There was no grand trauma, no “aha” moment to pinpoint when my sleep was disrupted. I just sort of drifted into insomnia. And there I have stayed, on and off, for almost 15 years. It has meant exhausted days and nights stretched out in front of me like the Grand Canyon. I have tried to remedy it over the years, using pills (soft herbal brands and the hard big pharma types), sprays (top picks: lavender and frankincense), a variety of “calming” sounds (including whale, panpipes and white noise) and, of course, the gold-level option of “wishing really hard”."

David Sedaris gives evidence to the Communities and Local Government Committee:
"On 6 January 2015, MPs on the Communities and Local Government Committee took evidence on litter. It was the committee's second evidence session for its inquiry on litter. The inquiry is looking at how significant a problem littering and fly-tipping is, and whether current government policies are adequate, and give local authorities enough autonomy to tackle the problem." [I also posted this to Metafilter.]

A Look at Battersea Park Station:
"At first sight, Battersea Park station appears to be a complete contradiction. It is not exactly pretty at platform level, but has a splendid façade and booking office. It has five platforms, but only one that is really both fit for purpose and useful. It is surprisingly busy despite the lack of any obvious reason for the high demand. It will also likely get much busier when construction begins on the Battersea Power station development or the Northern Line Extension."

An Open Letter to Men On the Subway During Rush Hour:
"I know you like to spread your chests wide, inhaling deeply and filling your lungs with that special patriarchal air that is your birthright. I know you need to place your legs in wide stances to give ample room to your massive testicles, which you have inherited after generations of Darwinism have assured only the largest and best scrotum survive. I know you need to mount your body against the entire center subway pole, claiming your land like Columbus. I get that."

Women, Minority First-Time Directors Face Tough Time Getting Into TV Biz: DGA Study:
"Women and minority episodic TV directors face a “significant hiring disadvantage” getting into the business because of their gender and race, according to a new five-year study by the Directors Guild of America. The report found that only 18% of all first-timers are female, and that only 13% are minorities."

Navigating The Social Mob Of Mistaken Identities On Twitter:
"In May 2010 I joined a startup called Aviary. My first boss was one of the founders of Aviary named Michael S Galpert. Michael’s initials are MSG and as an early tech adopter, he managed to snag @msg when Twitter came out. MSG is also short for Madison Square Garden, and the Twitter mentions go a little nuts when the Knicks’ and Rangers’ seasons pick up. Michael has fun with it, using a hashtag of #wrongmsg when retweeting some of the funny mistakes. @msg was my first understanding of mistaken identity on Twitter."

Great Museums Television:
"GREAT MUSEUMS is an award-winning documentary television series celebrating the world of museums. The series airs coast to coast on public television stations. GREAT MUSEUMS opens the doors of the museum world to millions of Americans through public television, new media and community outreach with the goal of "curating a community of learners." Executive produced by Marc Doyle and Chesney Blankenstein Doyle, the compelling educational series has won more than forty television awards for excellence, including multiple Cine Golden Eagles, Telly Awards, and Aurora Excellence Awards." [YouTube Channel]

The Code: A Declassified and Unbelievable Hostage Rescue Story:
"Colonel Jose Espejo was a man with a problem. As the Colombian army’s communications expert watched the grainy video again, he saw kidnapped soldiers chained up inside barbed-wire pens in a hostage camp deep in the jungle, guarded by armed FARC guerillas. Some had been hostages for more than 10 years, and many suffered from a grim, flesh-eating disease caused by insect bites."

Tracking a Ghost Mission 238 Million Km Away by The Planetary Society

Daniel Scuka describes the impending demise of the Venus Express spacecraft.

Dragon Reaches Orbit, but Falcon Stage Crashes on Recovery Ship by The Planetary Society

SpaceX’s ambitious attempt to land the first stage of its Falcon 9 rocket on an autonomous ocean platform was "close, but no cigar."

January 10, 2015

Their names fit their jobs #4 by Feeling Listless

The God Login by Jeff Atwood

I graduated with a Computer Science minor from the University of Virginia in 1992. The reason it's a minor and not a major is because to major in CS at UVa you had to go through the Engineering School, and I was absolutely not cut out for that kind of hardcore math and physics, to put it mildly. The beauty of a minor was that I could cherry pick all the cool CS classes and skip everything else.

One of my favorite classes, the one I remember the most, was Algorithms. I always told people my Algorithms class was the one part of my college education that influenced me most as a programmer. I wasn't sure exactly why, but a few years ago I had a hunch so I looked up a certain CV and realized that Randy Pausch – yes, the Last Lecture Randy Pausch – taught that class. The timing is perfect: University of Virginia, Fall 1991, CS461 Analysis of Algorithms, 50 students.

I was one of them.

No wonder I was so impressed. Pausch was an incredible, charismatic teacher, a testament to the old adage that you should choose your teacher first and the class material second, if you bother to at all. It's so true.

In this case, the combination of great teacher and great topic was extra potent, as algorithms are central to what programmers do. Not that we invent new algorithms, but we need to understand the code that's out there, grok why it tends to be fast or slow due to the tradeoffs chosen, and choose the correct algorithms for what we're doing. That's essential.

And one of the coolest things Mr. Pausch ever taught me was to ask this question:

What's the God algorithm for this?

Well, when sorting a list, obviously God wouldn't bother with a stupid Bubble Sort or Quick Sort or Shell Sort like us mere mortals, God would just immediately place the items in the correct order. Bam. One step. The ultimate lower bound on computation, O(1). Not just fixed time, either, but literally one instantaneous step, because you're freakin' God.

This kind of blew my mind at the time.

I always suspected that programmers became programmers because they got to play God with the little universe boxes on their desks. Randy Pausch took that conceit and turned it into a really useful way of setting boundaries and asking yourself hard questions about what you're doing and why.

So when we set out to build a login dialog for Discourse, I went back to what I learned in my Algorithms class and asked myself:

How would God build this login dialog?

And the answer is, of course, God wouldn't bother to build a login dialog at all. Every user would already be logged into GodApp the second they loaded the page because God knows who they are. Authoritatively, even.

This is obviously impossible for us, because God isn't one of our investors.

But.. how close can we get to the perfect godlike login experience in Discourse? That's a noble and worthy goal.

Wasn't it Bill Gates who once asked why the hell every programmer was writing the same File Open dialogs over and over? It sure feels that way for login dialogs. I've been saying for a long time that the best login is no login at all and I'm a staunch supporter of logging in with your Internet Driver's license whenever possible. So we absolutely support that, if you've configured it.

But today I want to focus on the core, basic login experience: user and password. That's the default until you configure up the other methods of login.

A login form with two fields, two buttons, and a link on it seems simple, right? Bog standard. It is, until you consider all the ways the simple act of logging in with those two fields can go wrong for the user. Let's think.

Let the user enter an email to log in

The critical fault of OpenID, as much as I liked it as an early login solution, was its assumption that users could accept a URL as their "identity". This is flat out crazy, and in the long run this central flawed assumption in OpenID broke it as a future standard.

User identity is always email, plain and simple. What happens when you forget your password? You get an email, right? Thus, email is your identity. Some people even propose using email as the only login method.

It's fine to have a username, of course, but always let users log in with either their username or their email address. Because I can tell you with 100% certainty that when those users forget their password, and they will, all the time, they'll need that email anyway to get a password reset. Email and password are strongly related concepts and they belong together. Always!

(And a fie upon services that don't allow me to use my email as a username or login. I'm looking at you, Comixology.)
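Since either identifier should work, the lookup at login is a simple two-step resolution. Here is a minimal sketch, assuming a hypothetical in-memory `USERS` table (the actual Discourse storage and hashing are obviously different):

```python
# Hypothetical user store keyed by username; only the lookup logic matters here.
USERS = {
    "codinghorror": {"email": "jeff@example.com", "password_hash": "..."},
}

def find_user(identifier):
    """Resolve a login identifier that may be a username or an email address.

    A minimal sketch: try username first, then fall back to email.
    Real systems should also handle normalization and uniqueness carefully.
    """
    identifier = identifier.strip().lower()
    if identifier in USERS:
        return USERS[identifier]
    # fall back to matching on the email address
    for user in USERS.values():
        if user["email"].lower() == identifier:
            return user
    return None
```

Note the case and whitespace normalization: users who type `Jeff@Example.com` on a phone keyboard still expect to get in.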

Tell the user when their email doesn't exist

OK, so we know that email is de-facto identity for most people, and this is a logical and necessary state of affairs. But which of my 10 email addresses did I use to log into your site?

This was the source of a long discussion at Discourse about whether it made sense to reveal to the user, when they enter an email address in the "forgot password" form, whether we have that email address on file. On many websites, here's the sort of message you'll see after entering an email address in the forgot password form:

If an account matches, you should receive an email with instructions on how to reset your password shortly.

Note the coy "if" there, which is a hedge against all the security implications of revealing whether a given email address exists on the site just by typing it into the forgot password form.

We're deadly serious about picking safe defaults for Discourse, so out of the box you won't get exploited or abused or overrun with spammers. But after experiencing the real world "which email did we use here again?" login state on dozens of Discourse instances ourselves, we realized that, in this specific case, being user friendly is way more important than being secure.

The new default is to let people know when they've entered an email we don't recognize in the forgot password form. This will save their sanity, and yours. You can turn on the extra security of being coy about this, if you need it, via a site setting.
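The behavior boils down to a single branch on that site setting. A minimal sketch, where the `coy` flag and the message wording are illustrative, not Discourse's actual setting name or copy:

```python
def forgot_password_response(email, known_emails, coy=False):
    """Return the message shown after a forgot-password request.

    With coy=False (the friendly default), tell the user whether the
    address is on file; with coy=True, hedge for extra security.
    """
    if coy:
        return ("If an account matches, you should receive an email "
                "with instructions on how to reset your password shortly.")
    if email in known_emails:
        return "We sent a password reset email to that address."
    return "No account matches that email address."
```

The coy branch deliberately sends the same message whether or not the address exists, which is the whole point of that hedge.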

Let the user switch between Log In and Sign Up any time

Many websites have started to show login and signup buttons side by side. This perplexed me; aren't the acts of logging in and signing up very different things?

Well, from the user's perspective, they don't appear to be. This Verge login dialog illustrates just how close the sign up and log in forms really are. Check out this animated GIF of it in action.

We've acknowledged that similarity by having either form accessible at any time from the two buttons at the bottom of the form, as a toggle:

And both can be kicked off directly from any page via the Sign Up and Log In buttons at the top right:

Pick common words

That's the problem with language, we have so many words for these concepts:

  • Sign In
  • Log In
  • Sign Up
  • Register
  • Join <site>
  • Create Account
  • Get Started
  • Subscribe

Which are the "right" ones? User research data isn't conclusive.

I tend to favor the shorter versions when possible, mostly because I'm a fan of the whole brevity thing, but there are valid cases to be made for each depending on the circumstances and user preferences.

Sign In may be slightly more common, though Log In has some nautical and historical computing basis that makes it worthy:

A couple of years ago I did a survey of top websites in the US and UK and whether they used “sign in”, “log in”, “login”, “log on”, or some other variant. The answer at the time seemed to be that if you combined “log in” and “login”, it exceeded “sign in”, but not by much. I’ve also noticed that the trend toward “sign in” is increasing, especially with the most popular services. Facebook seems to be a “log in” hold-out.

Work with browser password managers

Every login dialog you create should be tested to work with the default password managers in …

At an absolute minimum. Upon subsequent logins in that browser, you should see the username and password automatically filled in.

Users rely on these default password managers built into the browsers they use, and any proper modern login form should respect that, and be designed sensibly, e.g. the password field should have type="password" in the HTML and a name that's readily identifiable as a password entry field.

There's also LastPass and so forth, but I generally assume if the login dialog works with the built in browser password managers, it will work with third party utilities, too.

Handle common user mistakes

Oops, the user is typing their password with caps lock on? You should let them know about that.

Oops, the user typo'd the domain of their email address, the classic off-by-one-letter version of a popular email provider? You should either fix typos in common email domains for them, or let them know about that.
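Detecting a near-miss domain is a fuzzy string match against a list of popular providers. A sketch using the standard library's `difflib`; the domain list here is illustrative and a real deployment would use a much longer one:

```python
import difflib

# A few common mail domains; a real deployment would use a longer list.
KNOWN_DOMAINS = ["gmail.com", "yahoo.com", "hotmail.com", "outlook.com"]

def suggest_email(address):
    """If the domain looks like a near-miss for a known one, suggest a fix.

    Returns the corrected address, or None when the address is already
    fine, unparseable, or not close to any known domain.
    """
    local, _, domain = address.rpartition("@")
    if not local or domain in KNOWN_DOMAINS:
        return None
    matches = difflib.get_close_matches(domain, KNOWN_DOMAINS, n=1, cutoff=0.8)
    return f"{local}@{matches[0]}" if matches else None
```

Whether you silently apply the suggestion or show a "did you mean?" prompt is a UX call; prompting is safer, since some people really do have addresses at oddly-named domains.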

(I'm also a big fan of native browser "reveal password" support for the password field, so the user can verify that she typed in or autofilled the password she expects. Only Internet Explorer and I think Safari offer this, but all browsers should.)

Help users choose better passwords

There are many schools of thought on forcing, er, helping users choose passwords that aren't unspeakably awful, e.g. password123 and iloveyou and so on.

There's the common password strength meter, which updates in real time as you type in the password field.

It's a clever idea, but it gets awful preachy for my tastes on some sites. The implementation also leaves a lot to be desired, as it's left up to the whims of the site owner to decide what password strength means. One site's "good" is another site's "get outta here with that Fisher-Price toy password". It's frustrating.

So, with Discourse, rather than all that, I decided we'd default to a solid absolute minimum password length of 8 characters, and then verify the password to make sure it is not one of the 10,000 most common known passwords by checking its hash.
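The two checks described above can be sketched in a few lines. The tiny password set below is a toy stand-in for the real 10,000-entry list, and SHA-1 here is just a lookup key for the banned list, not a way to store user passwords:

```python
import hashlib

# Toy stand-in for the list of the 10,000 most common passwords;
# only the hashes of the banned list need to be kept around.
COMMON_PASSWORDS = {"password123", "iloveyou", "qwerty", "letmein"}
COMMON_HASHES = {hashlib.sha1(p.encode()).hexdigest() for p in COMMON_PASSWORDS}

def acceptable_password(candidate, min_length=8):
    """Reject passwords that are too short or on the common-password list."""
    if len(candidate) < min_length:
        return False
    return hashlib.sha1(candidate.encode()).hexdigest() not in COMMON_HASHES
```

This rejects "iloveyou" even though it clears the 8-character minimum, which is exactly the case a bare length rule misses.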

Don't forget the keyboard

I feel like keyboard users are a dying breed at this point, but for those of us who, when presented with a login dialog, like to rapidly type, tab, p4$$w0rd, enter:

please verify that this works as it should. Tab order, enter to submit, etcetera.

Rate limit all the things

You should be rate limiting everything users can do, everywhere, and that's especially true of the login dialog.

If someone forgets their password and makes 3 attempts to log in, or issues 3 forgot password requests, that's probably OK. But if someone makes a thousand attempts to log in, or issues a thousand forgot password requests, that's a little weird. Why, I might even venture to guess they're possibly … not human.

You can do fancy stuff like temporarily disable accounts or start showing a CAPTCHA if there are too many failed login attempts, but this can easily become a griefing vector, so be careful.

I think a nice middle ground is to insert standard pauses of moderately increasing size after repeated sequential failures or repeated sequential forgot password requests from the same IP address. So that's what we do.
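That middle ground reduces to a small function: no penalty for the first few honest mistakes, then moderately growing pauses with a cap. The base and cap values below are illustrative, not Discourse's actual settings:

```python
def login_backoff_seconds(recent_failures, base=2.0, cap=60.0):
    """Return how long to stall before processing the next login attempt.

    Pauses grow moderately with each sequential failure from the same IP,
    and are capped so an attacker can't turn this into a griefing vector
    that locks a legitimate user out indefinitely.
    """
    if recent_failures < 3:
        return 0.0  # a few honest mistakes cost nothing
    return min(base * (recent_failures - 2), cap)
```

A thousand scripted attempts each eat the full 60-second pause, while a human who fat-fingers a password twice notices nothing, which is the balance you want.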

Stuff I forgot

I tried to remember everything we went through when we were building our ideal login dialog for Discourse, but I'm sure I forgot something, or could have been more thorough. Remember, Discourse is 100% open source and by definition a work in progress – so as my friend Miguel de Icaza likes to say, when it breaks, you get to keep both halves. Feel free to test out our implementation and give us your feedback in the comments, or point to other examples of great login experiences, or cite other helpful advice.

Logging in involves a simple form with two fields, a link, and two buttons. And yet, after reading all this, I'm sure you'll agree that it's deceptively complex. Your best course of action is not to build a login dialog at all, but instead rely on authentication from an outside source whenever you can.

Like, say, God.

[advertisement] How are you showing off your awesome? Create a Stack Overflow Careers profile and show off all of your hard work from Stack Overflow, Github, and virtually every other coding site. Who knows, you might even get recruited for a great new position!

Astrobites at AAS 225: Day 4 by Astrobites

Thursday was the fourth and final day of the American Astronomical Society winter meeting.

Catch up on Day 1, Day 2, and Day 3.

Many attendees opted to sleep in a bit after about 1000 of us took over a local bar on Wednesday night for the annual unofficial party. Sleepy astronomers aside, there was plenty to see and do on this last day.


Press Conference: Black Holes & Binary Stars

One highlight from the morning press conference was a baffling radio pulsar. These incredibly dense neutron stars appear to “pulse” as they rotate, occasionally sending powerful jets toward our line of sight. So it was a mystery when one such pulsar disappeared without a trace, and reappeared without warning years later. Joeri van Leeuwen shared this mystery with us, and then revealed the explanation. The system is actually composed of two neutron stars orbiting each other, and the gravity is so strong that space is distorted enough for the pulsar’s beam to precess away from our view.


Plenary Talk: Alma Presents a Transformational View of the Universe

Al Wootten, NRAO Scientist and UVa Professor, updated the attendees about ALMA, the Atacama Large Millimeter/Submillimeter Array. He began with a historical overview, followed by a description of the instrument and the driving science goals.

In the remainder of the talk, Al presented highlights from results on distant galaxies ([C II] in z~6 quasars, lensed galaxies, gas outflows from the centers of galaxies), disks in young stars (HL Tau, gas flows across gaps in disks, debris disks), and stellar evolution (Carbon stars, SN1987A) and the solar system (refining the orbit of Pluto for the New Horizons spacecraft, observing the asteroid Juno spin). He concluded by encouraging anyone interested to download and check out ALMA’s science verification images.


Press Conference: Predictions & Probabilities

This session was a bit of a “potpourri,” as it was described by AAS Press Officer Rick Fienberg. Dayton Jones kicked things off by explaining how the Cassini spacecraft and the Very Long Baseline Array (VLBA) here on Earth have measured the location and orbit of Saturn to an accuracy of one mile (at Saturn’s distance from Earth). Measuring better orbits for the planets is used for all kinds of topics in astronomy, including dynamical models of the Solar System, predictions of eclipses, and interplanetary spacecraft navigation. The Juno spacecraft will perform the same measurements on Jupiter’s orbit after it reaches Jupiter in mid-2016.

Next, Adam Miller talked about the challenges astronomy faces now that we’re being flooded with huge amounts of data. He described a project at NASA’s Jet Propulsion Laboratory that uses machine learning algorithms to process the light curve data from over 50 million variable stars coming out of LSST. These algorithms allow us to construct extremely detailed maps of the Milky Way.

Brice Ménard described a gap in our understanding of the Galaxy. We have made maps of the distribution of atoms, small molecules, dust, and stars in the Galaxy, but it’s hard to detect the large molecules. He then showed a new 3D map of the large molecules in the Galaxy, created by analyzing stellar spectra from the Sloan Digital Sky Survey.

Finally, Robert Nemiroff talked about a fun concept called a “photonic boom.” He described sweeping a laser beam across the surface of the Moon. Due to the finite speed of light and some interesting geometric complications, the spot of light on the Moon’s surface will look like it splits into a pair of spots, moving in opposite directions. Aside from being an interesting thought experiment, this spot-pair creation might be observable in other astronomical systems, and could be used to measure quantities like the orientation of a surface to the Earth, and could lead to another way of measuring distances.

Hack Day

Throughout the day Thursday, several attendees gathered in an out-of-the-way ballroom for a so-called “hack day.” The idea is to pool our skills to work on a project that can largely be completed in one day. At the start of the day, hack ideas were pitched to the room, and participants self-organized into teams. The day was graciously sponsored by LSST and Northrop Grumman, who provided lunch and snacks. Astrobites documented many of these hacks in progress on Twitter, one of which you can experience firsthand once it is finalized in the coming weeks: a revamped astrobites website.



Thanks for following along this week while we brought you news from the 225th AAS Meeting! We’ll return you to your regularly scheduled astrobites posts next week. There should be a lot of papers showing up on astro-ph now that the meeting is over, so stay tuned!

Getting to know the Planetary Society staff by The Planetary Society

Working for The Planetary Society is an extraordinary job—we deal with extraordinary subject matter, we have an extraordinary mission, we work with extraordinary people, and we work for our extraordinary members and supporters. Jennifer Vaughn introduces some of the new staff here.

NASA Completes First Test Firing of SLS Core Stage Engine (Updated) by The Planetary Society

NASA completed a 500-second test firing of the RS-25 engine, which will power the core stage of the Space Launch System.

January 09, 2015

On services ... by Simon Wardley

When mapping a complex environment, the connections between components (whether activities, practices or data) are interfaces. Those interfaces may be provided through programmatic means (e.g. APIs), user interfaces (i.e. a webpage) or other means (e.g. physical interfaces).

However, those interfaces evolve as the component evolves. The interface starts as highly unstable (e.g. constantly changing) due to the chaotic and changing nature of the component. It's novel, it's new, we are operating in the uncharted space of the uncertain. Over time as the component becomes well understood then the interface becomes more stable, even giving an impression of order as the component becomes more of a commodity. For example, a plug and socket are well understood, well defined interfaces which mask the complexity of generating electricity from your average user.

So whenever you map out a complex system or business, you not only have evolving components but the interfaces have different levels of stability (see figure 1).

Figure 1 - Interfaces in a map.

When a component is consumed by others within a more complex system (e.g. electricity consumed by a computer) then there is a cost associated with any change of the interface or the operation of the component. We experience this (in a trivial sense) when we visit a different country with a different regime of power supply / power standards and we need to take adaptors to cope with the change.

With large volumes of users consuming an interface in a plethora of complex systems, the total cost of change can be significant. For example, try changing national power supply standards or currency (such as decimalisation in the UK) or the switch to digital TV. Users will tend to collectively resist such a change unless overwhelming reasons are given.

This resistance tends to stabilise the interface and discourage the provider from changing it. This is why your electricity provider can't decide to change the interfaces (e.g. frequency, voltage range) every week and if they attempted to there would be a huge public outcry. This is also why volume operations based services such as cloud services are really only suitable for when the activity has become a commodity and the interface can become relatively stable in a short period of time.

This doesn't mean that you can't provide your brand new "never invented before" concept through an external API to others. However, given that the interface is going to change an awful lot in the beginning as we explore what the new concept is, this will limit the volume of users you can support, i.e. for a brand new concept we're talking very few consumers. If you don't believe me, then when you next invent some brand new (never invented by anyone else before) activity, try providing it as a utility service and start changing the interfaces weekly as you discover more about it. The one thing you're not going to build is a volume based service with large volumes of users. It's simply not ready for it.

Volume based utility services provided through APIs are only suitable for well understood and well defined activities - things like provision of computing infrastructure, CRM, databases, platforms, specific widely used applications etc. These have all had decades to become widespread and well defined enough to be suitable.

Now, when you look at an interface, it simply provides a mechanism of meeting the need for a higher order component (remember these Wardley maps are based upon a chain of needs vs evolution). Hence, from figure 2, the user need is met by component A. However component A needs both component B & C and so forth.

Figure 2 - User Needs and Interfaces

Remember, sometimes those interfaces are provided by APIs, sometimes they are physical interfaces (e.g. a plug and socket), sometimes a user interface, etc. Now, in the above diagram, let us consider three user needs - represented by components A, B and D - that we wish to provide as external APIs to others.

Component D represents a highly evolved commodity-like activity suitable for volume operations where the interface should be relatively stable (an example would be infrastructure such as EC2). These can be provided as utility services and will normally end up over time having multiple providers or ways of implementing.

Component B represents an activity which is in the product phase of competition (i.e. it's still evolving). Hence there will be lots of competition between alternatives on feature differentiation and as a result change. It could be provided through an external API and is more likely to be a very "product" specific rental service rather than generic utility service.  There is usually only one provider for each product type, little duplication of interfaces and no guarantee that your product vendor will end up being the long term utility provider. You can usually see this difference in the language people use, naming specific providers rather than the generic activity. Hence rather than talking about a generic activity such as collaboration, people will often talk exclusively about the specific instance such as Microsoft Sharepoint. This doesn't mean Sharepoint can't be provided as a service, it can and is. However, it represents more of a rental service (often more suited to a subscription basis) as opposed to a utility service.

I know people call both Amazon EC2 & Microsoft Sharepoint "cloud" services, but then that's the awfulness of that dreaded word cloud. Amazon EC2 is far more of a utility service than Microsoft Sharepoint, which is still more of a rental service for a product provided over the internet. Collaboration software isn't quite there yet.

You get quite a lot of this "cloudification" of things (for want of a better word). For example, if I take MySQL and run it on EC2, have I created a MySQL Cloud? No, I've stuck a product on a utility service. The process of trying to provide this to large numbers of users and billing as a utility will certainly end up converting this to a more utility-like service (databases are evolved enough to be suitable) but whilst the interface to MySQL might remain the same, the system will be very different.

I try to avoid the word "cloud" and instead break down systems into their components. Some are utility whilst others are more products. The two are not the same.

Component A represents an activity which is relatively new (i.e. not widespread and well defined). Though it can be provided through APIs to others, the interface will change very frequently. It really isn't suitable for attempts to provide it on a volume basis through a rental / utility service to others but that won't stop people from trying. This is more suitable for discrete provision to a single or few clients whilst the activity evolves.

I suppose what I want to emphasise here is that the interfaces also evolve as the component evolves. That's a fairly obvious point but one to emphasise. Also when we talk about a services based organisation, then in the technology sense we're mainly talking about building an organisation where many of these interfaces are provided through APIs. However, just because you provide APIs doesn't mean all APIs are the same - some are inherently more stable than others due to how evolved the activity is. You need to factor this in.

The best guidance to building a services based organisation was apparently written in a memo by Jeff Bezos in 2002. I've not seen better since, so I thought I'd copy the spirit of that memo here.

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn't do this will be fired.

This was 13 years ago. I just thought I'd emphasise that point because I hear people talking about building a services organisation (e.g. use of micro services) as though this is some radically new idea. I'm not against this, it's a good idea but just be aware - it's not new and this is more about keeping up with the pack rather than gaining an advantage.
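The spirit of memo rules 1 and 3 can be sketched in miniature: a team's data store stays private, and everything other teams need is reachable only through the team's declared interface. This is an illustrative class, not anything Amazon actually wrote:

```python
class AccountsService:
    """One team's data, reachable only through its service interface.

    The underscore-prefixed store stands in for memo rule 3 (no direct
    reads of another team's data store); the public methods are the
    only contract other teams may depend on (rule 1).
    """

    def __init__(self):
        self._store = {}  # private to this team; never read directly

    def create(self, user_id, name):
        """Service interface for adding an account."""
        self._store[user_id] = {"name": name}

    def get_name(self, user_id):
        """Service interface for reading account data."""
        entry = self._store.get(user_id)
        return entry["name"] if entry else None
```

The discipline, not the transport, is the point: whether the calls travel over HTTP or in-process, consumers only ever touch the interface, which is what lets it later be externalized per rule 5.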

Who will win at the Baftas. 2015. by Feeling Listless

Film In keeping with what's become my routine, I was up bright and early to watch the Baftas being streamed on Facebook (for some reason). The best bit was obviously "Benedict Cumberbatch for The Imitation Game" "No!" mostly because it was the only moment in which either of the presenters, Stephen Fry and Sam Claflin, went off script. Anyway, here's the usual predictions based on having only seen a couple of the films.


BIRDMAN Alejandro G. Iñárritu, John Lesher, James W. Skotchdopole
BOYHOOD Richard Linklater, Cathleen Sutherland
THE GRAND BUDAPEST HOTEL Wes Anderson, Scott Rudin, Steven Rales, Jeremy Dawson
THE IMITATION GAME Nora Grossman, Ido Ostrowsky, Teddy Schwarzman
THE THEORY OF EVERYTHING Tim Bevan, Eric Fellner, Lisa Bruce, Anthony McCarten

Boyhood. Though the appearance of The Grand Budapest Hotel in so many categories is gratifying, given its experimental use of changes in aspect ratio and its sheer refusal to try to be an awards film. Plus it came out nearly a year ago, which shows just how long the Bafta memory is.

’71 Yann Demange, Angus Lamont, Robin Gutch, Gregory Burke
THE IMITATION GAME Morten Tyldum, Nora Grossman, Ido Ostrowsky, Teddy Schwarzman, Graham Moore
PADDINGTON Paul King, David Heyman
PRIDE Matthew Warchus, David Livingstone, Stephen Beresford
THE THEORY OF EVERYTHING James Marsh, Tim Bevan, Eric Fellner, Lisa Bruce, Anthony McCarten
UNDER THE SKIN Jonathan Glazer, James Wilson, Nick Wechsler, Walter Campbell

Under The Skin. Though I haven't seen any of the others. For all I know, if I'd seen Paddington this sentence would have begun differently. But I'll never forget seeing Scarlett Johansson walking past Primark.


ELAINE CONSTANTINE (Writer/Director) Northern Soul
GREGORY BURKE (Writer), YANN DEMANGE (Director) ’71
HONG KHAOU (Writer/Director) Lilting
PAUL KATIS (Director/Producer), ANDREW DE LOTBINIÈRE (Producer) Kajaki: The True Story

Pass. Pride?


IDA Pawel Pawlikowski, Eric Abraham, Piotr Dzieciol, Ewa Puszczynska
LEVIATHAN Andrey Zvyagintsev, Alexander Rodnyansky, Sergey Melkumov
THE LUNCHBOX Ritesh Batra, Arun Rangachari, Anurag Kashyap, Guneet Monga
TRASH Stephen Daldry, Tim Bevan, Eric Fellner, Kris Thykier
TWO DAYS, ONE NIGHT Jean-Pierre Dardenne, Luc Dardenne, Denis Freyd

Trash is an interesting choice here. Generally this is thought of as the foreign film category, but Trash was written by Richard Curtis and directed by Stephen Daldry and has Rooney Mara and Martin Sheen in the cast. But it's a Brazilian co-production, so there it is. Either way the Dardennes will win this.


20 FEET FROM STARDOM Morgan Neville, Caitrin Rogers, Gil Friesen
20,000 DAYS ON EARTH Iain Forsyth, Jane Pollard
FINDING VIVIAN MAIER John Maloof, Charlie Siskel
VIRUNGA Orlando von Einsiedel, Joanna Natasegara

Oooh, oooh, film I've seen. Of course it's an oddity because it won the Oscar for this category last year, insanely beating The Act of Killing which won at the Baftas last year.


BIG HERO 6 Don Hall, Chris Williams
THE BOXTROLLS Anthony Stacchi, Graham Annable
THE LEGO MOVIE Phil Lord, Christopher Miller

MARVEL film (ish) nominated for Bafta. The Lego Movie will win though.


BIRDMAN Alejandro G. Iñárritu
BOYHOOD Richard Linklater
WHIPLASH Damien Chazelle

Boyhood, because why the hell wouldn't it be? As an act of direction, Boyhood is one of the most thrilling examples we've ever seen.


BIRDMAN Alejandro G. Iñárritu, Nicolás Giacobone, Alexander Dinelaris Jr, Armando Bo
BOYHOOD Richard Linklater
WHIPLASH Damien Chazelle

Again - imagine trying to write and rewrite a coherent screenplay on the fly across a decade.


GONE GIRL Gillian Flynn

Insanely difficult brief produces film everyone seems to adore and from a much loved character. As the Postman Pat film shows, this isn't the easiest thing in the world apparently.


EDDIE REDMAYNE The Theory of Everything
RALPH FIENNES The Grand Budapest Hotel

But only because I've seen it, though it's fair to say part of the film's ongoing appeal is this performance.


FELICITY JONES The Theory of Everything

I like all of these actresses, so I've fallen back on the tie-breaker rules and chosen whoever's been in Doctor Who.


J.K. SIMMONS Whiplash

Notice how Hawke and Arquette were submitted for supporting actors even though you could argue that a fair percentage of the emotional weight of the film is carried by the two of them.


KEIRA KNIGHTLEY The Imitation Game
RENE RUSSO Nightcrawler

As above, though let's pause for a moment to enjoy the glow of Emma Stone having been nominated for a Bafta.


BIRDMAN Antonio Sanchez

Under The Skin.


BIRDMAN Emmanuel Lubezki
IDA Lukasz Zal, Ryzsard Lenczewski
INTERSTELLAR Hoyte van Hoytema
MR. TURNER Dick Pope

For being able to sustain Wes Anderson's vision across the various aspect ratios.


BIRDMAN Douglas Crise, Stephen Mirrione
THE IMITATION GAME William Goldenberg

But not Boyhood? Here's an interview with Sandra Adair, the editor of Boyhood, which demonstrates what an insane decision this is. She was there for the whole period of production and worked on it for three or four weeks a year during that period.


BIG EYES Rick Heinrichs, Shane Vieau
THE GRAND BUDAPEST HOTEL Adam Stockhausen, Anna Pinnock
THE IMITATION GAME Maria Djurkovic, Tatiana MacDonald
INTERSTELLAR Nathan Crowley, Gary Fettis
MR. TURNER Suzie Davies, Charlotte Watts

Clearly, though it's fair to say I haven't seen Interstellar yet so ...


THE IMITATION GAME Sammy Sheldon Differ
INTO THE WOODS Colleen Atwood
MR. TURNER Jacqueline Durran

Clearly, though it's fair to say I haven't seen Into The Woods yet so ...


GUARDIANS OF THE GALAXY Elizabeth Yianni-Georgiou, David White
INTO THE WOODS Peter Swords King, J. Roy Helland
MR. TURNER Christine Blundell, Lesa Warrener

Hello the craft categories. Hello Guardians of the Galaxy. Clearly it should have been nominated for Best Picture, but Bafta tends to be pretty conservative in these things, but the whole thing is a nonsense anyway, isn't it, etc etc


AMERICAN SNIPER Walt Martin, John Reitz, Gregg Rudloff, Alan Robert Murray, Bub Asman
BIRDMAN Thomas Varga, Martin Hernández, Aaron Glascock, Jon Taylor, Frank A. Montaño
THE GRAND BUDAPEST HOTEL Wayne Lemmer, Christopher Scarabosio, Pawel Wdowczak
THE IMITATION GAME John Midgley, Lee Walpole, Stuart Hilliker, Martin Jensen
WHIPLASH Thomas Curley, Ben Wilkins, Craig Mann

The film I've seen rule.


DAWN OF THE PLANET OF THE APES Joe Letteri, Dan Lemmon, Erik Winquist, Daniel Barrett
GUARDIANS OF THE GALAXY Stephane Ceretti, Paul Corbould, Jonathan Fawkner, Nicolas Aithadi
THE HOBBIT: THE BATTLE OF THE FIVE ARMIES Joe Letteri, Eric Saindon, David Clayton, R. Christopher White
INTERSTELLAR Paul Franklin, Scott Fisher, Andrew Lockley
X-MEN: DAYS OF FUTURE PAST Richard Stammers, Anders Langlands, Tim Crosbie, Cameron Waldbauer

They're all worthy and I'm only using my MCU bias here to choose. Presumably it'll actually go to Dawn of the Planet of the Apes for its leap forward in mo-cap.

THE EE RISING STAR AWARD (voted for by the public)


Doctor Who rule for playing the sister of the best companion in the new series. Margot Robbie second for being in my favourite new show in 2011, Pan-Am.

The Charlie Hebdo Massacre: We Need a Coherent Alternative Idea by Albert Wenger

Why write here about the massacre at the offices of the French magazine Charlie Hebdo? Because most of what I have been writing on Continuations is about developing a coherent view of what comes next for humanity. It is about finding a way forward in a world of extraordinary new technological capabilities and dislocations. The attack on Charlie Hebdo stands for and is part of a fundamental rejection of not just the type of future I envision but also of the modernity we currently live in. This is not a religious struggle, it is an ideological one. And in this we are at a distinct disadvantage: ISIS et al have a relatively coherent idea of the future, however appalling we may find it, whereas we don’t. And that’s an issue at a time when more and more people feel that the existing state of modernity is failing them.

The worst thing we could do right now is to stoke islamophobia or double down on the misguided war on terror. Attacks like this and 9/11 are in no small part designed to do just that. When I was growing up in Germany during the days of the Red Army Faction it was an avowed goal of the terrorists to bring out the worst in the German state. In the US, post 9/11 we have certainly more than obliged including the wars in Iraq and Afghanistan, Guantanamo, drones and secret government surveillance programs here in the US and abroad. All of that and pseudo intellectual attacks against Islam or worse yet personal attacks on Muslims are exactly the wrong thing now.

We have been in a similar place before and it is worth looking back for some insights. Fascism too is an ideology that emerged at a time of social upheaval caused by technology during the industrial revolution. Its roots go back as far as the late 1800s even though we tend to associate it most with Hitler and the rise of Nazism. Fighting it back then was made “easier” by two factors: first, it became clearly identified with Germany as a country and traditional warfare and second, there was a relatively coherent alternative idea around individual liberty and capitalism inside of strong nation states.

So once again we have an ideology that is a response to social upheaval caused by technology. But this time — at least so far — the new ideology is growing in a much more distributed and networked fashion. And many people all across the world are disillusioned with their current governments which they feel are ineffective and/or representing elites and big business. The renewed and distributed rise of fascism itself should warn us just how far we are from a coherent alternative. This is why I will continue in my own small way to contribute to what looks utopian now but could be the way forward.

Astrobites at AAS 225: Day 3 by Astrobites

Welcome to Wednesday!

Plenary Talk: The Interactions of Exoplanets with their Parent Stars (by Meredith Rawls)

The day started with a talk by Katja Poppenhaeger. She gave a compelling presentation about the different ways stars and planets can influence each other. It was interesting to see how many of these interactions go both ways: it is not just the star affecting the planet, but also the planet affecting the star.

Press Conference: Eta Carinae (by Meredith Rawls)

This press conference was all about one fascinating, massive, and extremely energetic star system called Eta Carinae.

The most interesting part was when speaker Thomas Madura pulled out a 3D-printed model of the system. If you have access to a 3D printer, the blueprint is freely available online so you can make your very own Eta Carinae.

Plenary Talk: Inflation and Parallel Universes: Science or Fiction? (by Erika Nesvold)

Max Tegmark, author of Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, gave a fascinating plenary talk about multiverses. He first defined “our” universe as “everything in the observable universe.” If space exists beyond the observable universe, we can’t see it because light from that region has not had time to reach us in the age of the universe. Tegmark then described four “levels” of possible alternate universes, with increasing amounts of controversy and, frankly, weirdness.

The first type of alternate universe, Level I, is just the space that exists past the boundary of the observable universe. This one’s not very controversial. We can’t observe this part of space, so we can’t directly test whether it exists, but it should be space much like our own, with the same physical laws. Inflation predicts that space is infinite, so if you accept inflation (which makes other predictions we can test), logic dictates that Level I alternate universes exist. We will be able to observe more and more Level I space as the universe gets older and allows light to travel farther.
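Tegmark’s boundary can be made quantitative: the edge of “our” universe is the particle horizon, the comoving distance light could have covered since the Big Bang in an expanding universe. Here is a minimal sketch of that integral, assuming a flat ΛCDM cosmology with illustrative, roughly Planck-like parameter values (my choices, not numbers from the talk):

```python
import math

# Assumed flat-LCDM parameters (illustrative values, not from the talk)
H0 = 67.7            # Hubble constant, km/s/Mpc
OMEGA_M = 0.31       # matter density fraction
OMEGA_L = 0.69       # dark energy density fraction
C = 299792.458       # speed of light, km/s
MPC_PER_GLY = 306.6  # megaparsecs per billion light-years

def comoving_horizon_gly(steps=100_000):
    """Comoving distance to the particle horizon today:
    integrate c / (a^2 H(a)) da from a ~ 0 to a = 1."""
    total = 0.0
    da = 1.0 / steps
    for i in range(1, steps + 1):
        a = (i - 0.5) * da                        # midpoint rule
        h = H0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        total += C / (a * a * h) * da             # contribution in Mpc
    return total / MPC_PER_GLY                    # convert Mpc -> Gly
```

Because space keeps expanding while the light is in transit, this comes out near 46 billion light-years, well beyond the naive c times 13.8 billion years, and it grows as the universe ages, which is exactly why more Level I space becomes observable over time.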

Level II universes describe regions in our own space that experienced different amounts of inflation. These regions would be infinitely far away (because you’d have to travel through inflating space to get there), and would have the same fundamental physical laws, but different values for things like the cosmological constant.

Level III universes are probably what you think of when you hear about “parallel universes” from science fiction. These universes are predicted by collapse-free quantum mechanics. For example, if a particle exists in two states at once (like Schrödinger’s cat) and is then measured, two universes may split apart: one in which the particle is measured to be in state 1, and another in which the particle is measured to be in state 2. You can produce a lot of parallel Level III universes this way! This level is fairly controversial, although Tegmark pointed out that the Level III universes should all have the same old boring laws of physics.

Level IV universes are the highest on the weirdness scale. These universes would have different fundamental laws of physics, and we don’t really know what they would look like.

Lunch Interlude: Outreach with Science Train (by Meredith Rawls)

During the lunch hour, a small group of astronomers took to the streets and nearby transit center to proselytize science. This was Lucianne Walkowicz’s idea, and it was a blast. Sometimes you can strike up the best conversations in unexpected places. We talked with folks from all walks of life about everything from the accelerating expansion of the Universe to colonizing exoplanets.


Press Conference: Seminar for Science Writers (by Meredith Rawls)

The afternoon press conference had a slightly different format. Instead of brand new results from the Hubble Space Telescope, this session featured a recap of Hubble’s entire 25-year history. The idea was for science writers to get an opportunity to write a longer piece reflecting on Hubble’s rich history. The telescope truly has played a remarkable role in scientific discoveries and as a cultural icon.


The afternoon plenary sessions included a lot of discussion about cosmology, including Planck results that lend further support to the idea that the universe is flat. Tune in tomorrow for our summary of the last day of talks!

January 08, 2015

Soup Safari #13: Parsnip and Honey at Bretta & Co. by Feeling Listless

Lunch. £5.00. Bretta & Co., 5 Heathfield St (off Bold Street), Liverpool L1 4AT. Tel: 0151 709 6369. Website.

Subscriptions (feed of everything)

Updated using Planet on 29 January 2015, 06:48 AM