Francis’s news feed

This brings together on one page various news websites and diaries which I like to read.

March 01, 2015

Leonard Nimoy: A Science Fan’s Appreciation by The Planetary Society

Mat Kaplan pays a heartfelt tribute to a science fiction icon.

February 28, 2015

Corpse too Bright? Make it Bigger! by Astrobites

Title: “Circularization” vs. Accretion — What Powers Tidal Disruption Events?
Authors: T. Piran, G. Svirski, J. Krolik, R. M. Cheng, H. Shiokawa
First Author’s Institution: Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem


Our day-to-day experiences with gravity are fairly tame. It keeps our GPS satellites close and ready for last-minute changes to an evening outing, brings us the weekend snow and rain that beg for a cozy afternoon curled up warm and dry under covers with a book and a steaming mug, anchors our morning cereal to its rightful place in our bowls (or in our tummies, for that matter), and keeps the Sun in view day after day for millennia on end, nourishing the plants that feed us and radiating upon us its cheering light.  Combined with a patch of slippery ice, gravity may produce a few lingering bruises, and occasionally we’ll hear about the brave adventurers who, in search of higher vistas, slip tragically off an icy slope or an unforgiving cliff.  But all in all, gravity in our everyday lives is a largely unnoticed, unheralded hero that works continually behind the scenes to maintain life as we know it.

Park yourself outside a relatively small but massive object such as the supermassive black hole lurking at the center of our galaxy, and you’ll discover sly gravity’s more feral side. Gravity’s inverse-square dependence on your distance from your massive object of choice dictates that as you get closer and closer to said object, the strength of gravity will increase drastically: if you halve your distance to the massive object, the object will pull four times as hard on you; if you quarter your distance, it’ll pull sixteen times as hard; and, well, hang on tight to your shoes, because you may start to feel them tugging away from your feet. At this point, though, you should be high-tailing it away from the massive object as fast as you can rather than attending to your footwear, for if you’re sufficiently close, the difference in the gravitational pull between your head and your feet can be large enough that you’ll stretch and deform into a long string—or “spaghettify,” as astronomers have officially termed this painful and gruesome path of no return.
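
The inverse-square arithmetic above is easy to sanity-check; a minimal sketch (the numbers are just the ones from the paragraph, not from any paper):

```python
# Gravitational pull follows an inverse-square law: halve your distance
# to the object and it pulls four times as hard on you.
def relative_pull(distance_fraction):
    """Force relative to the pull at the original distance."""
    return 1.0 / distance_fraction ** 2

print(relative_pull(0.5))   # half the distance: 4x the pull
print(relative_pull(0.25))  # a quarter of the distance: 16x the pull
```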


Figure 1. A schematic of the accretion disk created when a star passes too close to a supermassive black hole. The star is ripped up by the black hole, and its remnants form the disk. Shocks (red) generated as stellar material falls onto the disk produce the light we observe. [Figure taken from today’s paper.]

While it doesn’t look like there’ll be a chance for the daredevils among us to visit such an object and test these ideas any time soon, there are other things that have the unfortunate privilege of doing so: stars. If a star passes closely enough to a supermassive black hole, the star’s self-gravity—which holds it together in one piece—can be dwarfed by the difference in the black hole’s gravitational pull from one side of the star to the other. The tides the black hole raises on the star (much like the oceanic tides produced by the Moon and the Sun on Earth) can then become so large that the star deforms until it rips apart.  The star spaghettifies in what astronomers call a tidal disruption event, or TDE for short. The star-black hole separation below which the star succumbs to such a fate is called the tidal radius (see Nathan’s post for more details on the importance of the tidal radius in TDEs). A star that passes within this distance sprays out large quantities of its hot gas as it spirals to its eventual death in the black hole. But the star doesn’t die silently.  The stream of hot gas it sheds can produce a spectacular light show that can last for months. The gas, too, is eventually swallowed by the black hole, but first forms an accretion disk around the black hole that extends up to the tidal radius. The gas violently releases its kinetic energy in shocks that form near what would have been the original star’s point of closest approach (its periapsis) and where the gas, after wrapping around the black hole, collides with the stream of newly infalling stellar gas at the edge of the disk (see Figure 1). It is the energy radiated by these shocks that eventually escapes and makes its way to our telescopes, where we can observe it—a distant flare at the heart of a neighboring galaxy.
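
For readers who want numbers: the tidal radius has a standard order-of-magnitude form, r_t ≈ R_star × (M_BH/M_star)^(1/3). A rough sketch for a Sun-like star near a black hole of roughly Sgr A*'s mass (all values approximate, and not taken from today's paper):

```python
# Order-of-magnitude tidal radius: inside r_t the black hole's
# differential pull beats the star's self-gravity.
R_sun = 6.96e8          # solar radius, metres
M_sun = 1.99e30         # solar mass, kg
M_bh = 4e6 * M_sun      # a black hole of roughly Sgr A*'s mass (approximate)

r_t = R_sun * (M_bh / M_sun) ** (1 / 3)
print(f"tidal radius ~ {r_t:.1e} m")  # roughly 1e11 m, under 1 AU
```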

Or so we thought.


TDEs, once just a theorist’s whimsy, have catapulted in standing to observational reality as TDE-like flares have been observed around nearby supermassive black holes. An increasing number of these have been discovered through UV/optical observations (the alternative being X-rays), which have yielded some disturbing trends that contradict the predictions of the classic TDE picture. These UV/optical TDEs aren’t as luminous as we expect. They aren’t as hot as we thought they would be, and many of them stay at the same temperature rather than cooling with time. The light we do see seems to come from a region much larger than we expected, and the gas producing the light is moving more slowly than our classic picture suggested. Haven’t thrown in the towel yet?

But hang on to your terrycloth—and cue the authors of today’s paper. Inspired by new detailed simulations of TDEs, they suggested that what we’re seeing in the optical is not the light from shocks in an accretion disk that extends up to the tidal radius, but from a disk that extends about 100 times that distance. Again, shocks from interacting streams of gas—but this time extending up to and at the larger radius—produce the light we observe. The larger disk automatically solves the size problem, and conveniently solves the velocity problem with it, since Kepler’s laws predict that material would be moving more slowly at the larger radius. This in turn reduces the luminosity of the TDE, which is powered by the loss of kinetic energy (which scales with the square of the velocity) at the edge of the disk. A larger radius and lower luminosity work to reduce the blackbody temperature of the gas. The authors predicted how each of the observations inconsistent with the classic TDE model would change under the new model, then compared against the measured peak luminosities, temperatures, line widths (a proxy for the speed of the gas), and estimated sizes of the emitting regions for seven TDEs that had been discovered in the UV/optical, and found good agreement.
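
The chain of reasoning above can be sketched numerically; assuming Keplerian orbits (velocity ∝ radius^(-1/2)) and shock power scaling with kinetic energy (∝ velocity squared), a disk ~100 times larger gives:

```python
# If the emitting region is ~100x larger, Keplerian velocity scales as
# v ~ r**-0.5 and the shocks' kinetic-energy budget as v**2 ~ 1/r.
radius_factor = 100.0

velocity_factor = radius_factor ** -0.5   # gas moves ~10x more slowly
luminosity_factor = velocity_factor ** 2  # ~100x less kinetic energy to radiate

print(velocity_factor)    # 0.1
print(luminosity_factor)  # 0.01
```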

But as most theories are wont to do, while this model solves many observational puzzles, it opens another one: these lower luminosity TDEs radiate only 1% of the energy the stellar remains should lose as they are accreted onto the black hole.  So where does the rest of the energy go?  The authors suggest a few different means (photon trapping? outflows? winds? emission at other wavelengths?), but all of them appear unsatisfying for various reasons.  It appears that these stellar corpses will live on in astronomers’ deciphering minds.

Highlights from our reddit Space Policy AMA by The Planetary Society

The space policy and advocacy team at The Planetary Society held an AMA (ask me anything) on reddit; here are some of the highlights.

Pluto Science, on the Surface by The Planetary Society

New Horizons' Principal Investigator Alan Stern gives an update on the mission's progress toward Pluto.

February 27, 2015

Would You Like To Play A Game? by Feeling Listless

Games Find above Nature journal's own mini-documentary about DeepMind's proficiency in playing computer games (here's a link to the paper). The story has been all over the intelligent media these past couple of days. Here's a version from The New Yorker:

"Deep neural networks rely on layers of connections, known as nodes, to filter raw sensory data into meaningful patterns, just as neurons do in the brain. Apple’s Siri uses such a network to decipher speech, sorting sounds into recognizable chunks before drawing on contextual clues and past experiences to guess at how best to group them into words. Siri’s deductive powers improve (or ought to) every time you speak to her or correct her mistakes. The same technique can be applied to decoding images. To a computer with no preëxisting knowledge of brick walls or kung fu, the pixel data that it receives from an Atari game is meaningless. Rather than staring uncomprehendingly at the noise, however, a program like DeepMind’s will start analyzing those pixels—sorting them by color, finding edges and patterns, and gradually developing an ability to recognize complex shapes and the ways in which they fit together."
The best news is that the programme's not very good at Pac-Man because it's rubbish at forward planning. So far.

FCC Votes to Keep Internet Access Open for Innovation by Albert Wenger

The FCC just ruled to ban fast lanes for internet access. Consumers have already paid for bandwidth through their bills, and they understood that cable companies were trying to double dip by also charging content originators for the same bandwidth. Such paid fast lanes would have deeply entrenched existing incumbents and made competition and innovation in content and services much more difficult.

Huge kudos to all the organizations that worked so diligently to inform individuals and policy makers about this issue. A big thank you also to John Oliver, who gave this effort a much-needed jolt by memorably rebranding “net neutrality” (which you can search for to find all my prior posts).

Of course this is just the beginning of a process. Next up is the possibility of a lawsuit by some broadband providers, as well as attempts at legislative end runs around this. But in the meantime I will celebrate this outcome by making a donation to Fight for the Future and encourage everyone to do the same.

Black Holes Grow First in Mergers by Astrobites

Title: Following Black Hole Scaling Relations Through Gas-Rich Mergers
Authors: Anne M. Medling, Vivian U, Claire E. Max, David B. Sanders, Lee Armus, Bradford Holden, Etsuko Mieda, Shelley A. Wright, James E. Larkin
First Author’s Institution: Research School of Astronomy & Astrophysics, Mount Stromlo Observatory, Australian National University, Cotter Road, Weston, ACT 2611, Australia
Status: Accepted to ApJ

It’s currently accepted theory that every galaxy has a supermassive black hole (SMBH) at its center. The masses of these black holes have been observed to be strongly correlated with the galaxy’s bulge mass, total stellar mass and velocity dispersion.

Figure 1: NGC 2623 - one of the merging galaxies observed in this study. Image credit: NASA.


The mechanism which drives this has long been thought to be mergers (although there are recent findings showing that bulgeless galaxies which have not undergone a merger can also host high-mass SMBHs), which cause a funneling of gas into the center of a galaxy, where it is either used in a burst of star formation or accreted by the black hole. The black hole itself can regulate both its own growth and the galaxy’s if it becomes active and throws off huge winds which expel the gas needed for star formation and black hole growth on short timescales.

To understand the interplay between these effects, the authors of this paper study 9 nearby ultraluminous infrared galaxies at a range of stages through a merger and measure the mass of the black hole at the center of each. They calculated these masses from spectra taken with the Keck telescopes on Mauna Kea, Hawaii, by measuring the stellar kinematics (the movement of the stars around the black hole) as shown by the Doppler broadening of the emission lines in the spectra. Doppler broadening occurs when gas is emitting light at a very specific wavelength but is also moving either towards or away from us (or both, if it is rotating around a central object). Some of this emission is Doppler shifted to larger or smaller wavelengths, effectively smearing (or broadening) a narrow emission line into a broad one.
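
The width-to-speed conversion behind this technique is just the Doppler relation, Δλ/λ ≈ v/c; a minimal sketch with illustrative numbers (not values from the paper):

```python
# Doppler broadening: gas moving at speed v smears a line of rest
# wavelength lam0 over roughly delta_lam = lam0 * v / c.
C = 3.0e5  # speed of light, km/s

def velocity_from_width(lam0_nm, delta_lam_nm):
    """Line-of-sight speed implied by a broadened line, in km/s."""
    return C * delta_lam_nm / lam0_nm

# e.g. a 2 nm smear on a 500 nm line implies gas moving at ~1200 km/s
print(velocity_from_width(500.0, 2.0))
```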


Figure 2: The mass of the black hole against the stellar velocity dispersion, sigma, of the 9 galaxies observed in this study. Also shown are galaxies from McConnel & Ma (2013) and the best fit line to that data as a comparison to typical galaxies. Originally Figure 2 in Medling et al. (2015)

From this estimate of the rotational velocities of the stars around the centre of the galaxy, the mass of the black hole and the velocity dispersion can be calculated. These measurements for the 9 galaxies in this study are plotted in Figure 2 (originally Fig. 2 in Medling et al. 2015) and are shown to be either within the scatter of, or above, the typical relationship between black hole mass and velocity dispersion.

The authors ran a Kolmogorov–Smirnov statistical test on the data to confirm that these merging galaxies are drawn from a completely different population than those that lie on the relation, with a p-value of 0.003; i.e., the likelihood that these merging galaxies are drawn from the same population as the typical galaxies is 0.3%.
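
A two-sample Kolmogorov–Smirnov test simply measures the largest gap between the empirical cumulative distributions of the two samples; a self-contained sketch with invented data (the paper's actual result is the quoted p = 0.003):

```python
# Two-sample K-S statistic: the biggest vertical gap between the
# empirical CDFs of the two samples. The data below are invented purely
# to illustrate the calculation; they are not the paper's measurements.
def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in a + b)

typical = [6.1, 6.4, 6.8, 7.0, 7.3, 7.5, 7.9, 8.2]       # log10(M_BH), made up
mergers = [7.9, 8.1, 8.3, 8.5, 8.6, 8.8, 9.0, 9.2, 9.4]  # made up, offset high

print(ks_statistic(typical, mergers))  # close to 1: very different samples
```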

The black holes therefore have larger masses than they should for the stellar velocity dispersion in their galaxies. This suggests that the black hole grows first in a merger, before the bulges of the two galaxies have fully merged and settled into a gravitationally stable (virialized) structure. Although measuring the velocity dispersion in a bulge that is really two bulges merging is difficult and can introduce errors, simulations have shown that the velocity dispersion will only be underestimated by 50%, an amount which is not significant enough to change these results.

The authors also consider whether there has been enough time since the merger began for these black holes to grow so massive. Assuming that both galaxies originally lay on typical scaling relations for the black hole mass, and that the black holes accreted at the typical (Eddington) rate, they find that the growth should have taken somewhere in the range of a few tens to a hundred million years – a time much less than the simulated time for a merger to happen.
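
The growth-time argument can be sketched: a black hole accreting at the Eddington rate grows exponentially, with an e-folding (Salpeter) time of roughly 45 million years for the usual ~10% radiative efficiency (these are standard textbook values, not numbers taken from the paper):

```python
import math

# Eddington-limited growth is exponential with e-folding (Salpeter)
# time ~45 Myr. Time to grow by a factor f: t = t_salp * ln(f).
T_SALPETER_MYR = 45.0

def growth_time_myr(mass_factor):
    return T_SALPETER_MYR * math.log(mass_factor)

print(growth_time_myr(2))   # doubling the mass: ~31 Myr
print(growth_time_myr(10))  # growing 10x: ~104 Myr
```

Both figures sit comfortably inside the few-tens-to-a-hundred-million-year range quoted above.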

A second consideration is how long it will take for these galaxies to virialize and for their velocity dispersion to increase to bring each one back onto the typical scaling relation with the black hole mass. They consider how many more stars are needed to form in order for the velocity dispersion in the bulge to reach the required value. Taking measured star formation rates of these galaxies gives a range of timescales of about 1-2 billion years which is consistent with simulated merger timescales. It is therefore plausible that these galaxies can return to the black hole mass-velocity dispersion relation by the time they have finished merging.
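
The timescale arithmetic here is a single division, the stellar mass still needed over the star formation rate; a sketch with invented numbers (the paper's own values give the quoted 1-2 billion years):

```python
# Time to build the extra bulge stars needed to virialize:
# t = (stellar mass still needed) / (star-formation rate).
# Both numbers below are hypothetical, chosen only for illustration.
mass_needed = 5e10   # solar masses still to form, hypothetical
sfr = 30.0           # star-formation rate, solar masses per year, hypothetical

t_gyr = mass_needed / sfr / 1e9
print(t_gyr)  # ~1.7 Gyr, in the 1-2 Gyr ballpark
```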

The authors conclude therefore that black hole fueling and growth begins in the early stages of a merger and can outpace the formation of the bulge and any bursts of star formation. To confirm this result, measurements of a much larger sample of currently merging galaxies need to be taken – the question is, where do we look?

Russia Moves to Support ISS through 2024, Create New Space Station by The Planetary Society

The future of the International Space Station is a little clearer this week, following a statement from Russia supporting an extension of the orbiting complex through 2024.

February 26, 2015

Guardians of the IMAX. by Feeling Listless

Film Back in the olden days, one of my monthly treats was the dvd release of classic Doctor Who, that one thing I'd have to look forward to. These ended at about this time last year and until whatever in the hell's going on with the #omnirumour is resolved, assuming dvds are still a thing, they're not likely to begin again. The one remaining story with extant episodes, The Underwater Menace, has also apparently been pulled from release this year. Click here for the inevitable campaign.

Looking for something to replace this lack, I've begun buying MARVEL films on blu-ray on a monthly basis.  Iron Man 3 in January.  Thor: The Dark World in February and for March, although I jumped the gun because I saw a cheap copy in CEX this morning, Guardians of the Galaxy.  Since the Who releases were never in broadcast order, I decided this homage to that schedule was allowed.  Captain America: The Winter Soldier can wait until April.

Either way, I watched Guardians again this evening and despite the lack of the shock of the new, as expected it's still brilliant and all those people who say it isn't are entirely incorrect.  New readers can look here for a long rambly post describing my appreciation for the thing and its implications for the film industry.  I still think it's game changing.  I still think it was the most important film of last year in industry terms.

All of which is preamble to introducing one of the oddities about the blu-ray release in its 3D edition.  In the 3D edition there are two discs, which is surprisingly rare, one for 3D and one for 2D.  Most of the other 3D films I have include both versions on the same disc, so it's curious that this doesn't.  But owning a 3D set-up I ventured forth.  Began watching and beyond the opening sequences, there's a ratio change from 2.40 to filling the screen.

But the 3D is horrible.  Smushy and difficult to watch, which is sometimes the case with my room set-up and eyesight.  I assume it must look better for others, but not for me, so I decided to try the 2D version instead.  So I swap discs, FF through to the point I was at and there is no ratio change.  The 2D edition of this film is a different version of the film to the 3D version, my guess is because the 3D disc contains the IMAX version.

Quandary.  I'd quite like to watch this IMAX version with its extra screen information in 2D please.  It's not supplied.  What to do?  What to do?  

Turns out I can turn the 3D off on the television.  The television will let me watch 3D films in 2D.  How curious, how strange, but in this instance how useful.  So I did.

Two things:

(1)  Why would MARVEL not supply the IMAX version of Guardians as per the Nolan Batman films, Tron Legacy and The Hunger Games: Catching Fire in 2D anyway, perhaps by offering the choice on the 3D disc without having to mess about with the television?

(2)  Are all the MARVEL films like this?  Have I missed out on IMAX versions of Iron Man 3 and Thor: The Dark World by only buying the 2D versions of those?  Do you own these?  Can you say?

Either way, the sci-fi vistas of James Gunn's film look utterly gorgeous across my giant flatscreen and if you've not seen this IMAX version yet it's well worth the hokus-pokery.

Here's James Gunn and the cast talking about this IMAX version in promos from YouTube that contain precisely no footage from the IMAX version:

How Much is Enough by Robert and Edward Skidelsky (Book Review) by Albert Wenger

Over vacation I read “How Much is Enough?” by Robert and Edward Skidelsky. I highly recommend the book as it goes after a central question that we should all be asking both individually and collectively.

The book starts with an excellent chapter on Keynes's now frequently cited essay on “Economic Possibilities for our Grandchildren.” Their basic diagnosis of where Keynes went wrong is that he assumed a separation of needs and wants, expecting our economy to grow rapidly enough to provide for everyone’s needs. And while that did happen, our wants have proven to be insatiable.

The second chapter titled “The Faustian Bargain” is brilliant. It provides a nutshell intellectual history of how and why we came to abandon the distinction between needs and wants. This chapter alone is worth the price of the entire book, including tracing the evolution of the character of Faust and connecting the Catholic concept of felix culpa to the self-interested individual actors of modern economics.

The third chapter on “The Uses of Wealth” is equally compelling in its broad reach across both time and geography. The fourth chapter makes a compelling case that “Happiness Economics” is at best circular and at worst would support supplying the entire population with drugs.

The first chapter that I found myself disagreeing with was the fifth, which asks “Limits to Growth: Natural or Moral?” While it provides a reasonably strong argument that we should not shy away from explicitly valuing nature for itself, I found the sections that more or less dismiss limits due to pollution, and in particular carbon dioxide, to conflict directly with my own views. Nothing wrong with that per se, except that the logic in this section seems seriously faulty, in particular on non-linear change.

Chapter six, which lays out the Skidelskys’ version of “the good life,” feels oddly dry in comparison to the rest of the book. Maybe that’s because the seven elements they identify — health, security, respect, personality, harmony with nature, friendship and leisure — come across as more common sense than inspiring. Or it could be that I found myself disagreeing with some of the specific characterizations of these elements, such as confusing security with stability or an overly strong association of friendship with marriage. As in the previous chapter, what shines through here are some deeply held moral convictions that are insufficiently investigated.

The seventh and final chapter looks at regulatory changes that might get us away from economic growth for its own sake and towards a “good life” for more and more people. I was happy to see Basic Income Guarantee being one of the proposals. I also like the idea of taxes on advertising and financial trading transactions. The other proposal is a generalized consumption tax, which they argue might be more palatable than a progressive income tax. This is an interesting idea that I am planning to spend more time on.

Anybody interested in the history and future of capitalism should read this book, which, really, should be everyone. Forcing one to consider the basic question of “How much is enough?” and providing the intellectual context alone is hugely valuable even if you disagree with or are unmoved by some of the recommendations.

There are tantalizing passages about leisure and its changing conception over time throughout the book. I would have loved to see more on what it would take to return to the more historic concept of leisure as activities that require and develop skills (the book hints at but doesn’t elaborate on a need for a different education, which the authors acknowledge in the afterword). I believe that a lot of inspiration could flow from exploring what more of that type of leisure could do for humanity.

The hacker ethos starts at home by Zarino

About a week ago, Adrian McEwen—IoT guru and co-founder of Liverpool’s first makerspace—wrote a blog post about what he called the “DoES Liverpool Ethos.”

It’s worth a read, if only for the examples of how DoES Liverpool (the aforementioned makerspace) has used bits and bobs of kit, and a few hours of volunteered time, to make natty, useful, real-world systems you simply couldn’t buy off a shelf.

Like their DoorBot that plays a personal theme tune when you clock into the hotdesking space. Or their Weeknotes script that trawls Twitter for mentions of @DoESLiverpool and #weeknotes, and posts a digest on the DoESLiverpool blog every Monday morning.

The organisation [DoES Liverpool] has an internalised sense of the malleability of the world that software and digital fabrication brings.

Maker Night at DoES Liverpool

Adrian being Adrian, he also takes two swipes at design agencies.1

But after that, he gets to the nub of the discussion: That we need this—for want of a better word—hacker ethos to spread outside of geeky introverted hackspaces, and into wider industry.

Adopting this culture of […] making know-how would give a company an advantage over any of its competitors who haven't yet realised how the world is changing.

Because the world is changing. Fast, cheap, omnipresent networking. Fast, cheap Arduinos and Raspberry Pis. Next-day electronics deliveries. Stack Overflow… More and more is now possible with less and less.

Francis Irving holding an Arduino-powered kite at an Awesome Liverpool gift night

What previously could only have been achieved by engaging a large electronics manufacturer, crafting a bespoke system they’d hope to re-sell to cover their costs, or contracting the Crapitas of this world, with all the pain that brings – what previously was out of the reach of small to medium enterprises is now in their grasp.

…If they think to grasp it.

I’d say barely 1% of the UK’s businesses have realised the opportunity presented by low-cost embedded software and digital fabrication.

But then it took many businesses easily two decades to realise that the web would open them up to new markets, new products, and new customers. And we’re only just reaching the point where there’s enough demand to sustain businesses like Squarespace, that bring cheap, off-the-shelf business websites to the masses.

Make Do And Mend poster from the 1940s

It sort of reminds me of the “make do and mend” attitude of the second World War. With supplies tight, and time at a premium, people were forced to find innovative, malleable solutions to their problems. Companies, too, became a little more malleable, a little less traditional. They took on a female workforce, adjusted hours, brought in new technology, re-purposed existing tools for new jobs, and found innovative new markets for byproducts. When the shit hit the fan, industry adopted just a little of what Adrian might call the DoES Liverpool ethos.

How can we take that ethos, and help it permeate into standard business practice? Just as the “make do and mend” attitude spread from the country cottages and urban tenements of 1930s Britain into every aspect of the wartime economy?

Women working in a WWII munitions factory

I can’t help feeling that this movement has to start at home.

A few years ago, corporate sysadmins sat smugly atop mountains of locked-down Blackberry handsets. There was almost no flexibility, no innovation in business IT provision. Then came the iPhone and the iPad. Suddenly everyone—from the janitor, to the CEO—was bringing their own devices to work, and expecting the system to cope. The sysadmins thought they could stem the tide, ban personal devices, but eventually even they realised they had to go with the change. The industry shifted in the face of new technology, and new expectations from its employees.

Outside the office, the same happened in the air. We are in the middle of a huge about-face by the aviation industry. After decades of requiring passengers to switch off all electronic devices—for absolutely no good reason—the sheer weight of customers who simply expected to be able to use their kit on a plane has forced the operators—slowly, ever so slowly—to change their stance. The industry shifted in the face of new technology, and new expectations from its customers.

We are just a little sooner along that curve with the hacker ethos. Some businesses will jump on board as early adopters – they’ll hire innovative freelancers like Adrian and Paul. But, for others, it will take a shift in culture.

With each new generation of employees, and each new generation of customers, it will become harder and harder for industries to resist change.

When you’ve been brought up on a childhood of “hacking” systems—customising Minecraft worlds with Python code, not purchase orders, or pasting JavaScript into your browser URL bar to extract content from a webpage—you simply won’t stand for antiquated, inflexible, wasteful business practices at work.

Schoolkids participating in a Raspberry Pi Codejam

That’s why I have such respect for people like Alan O'Donohoe, Code Club, and the Raspberry Pi movement. We should be teaching everyone to hack. Not because it’ll help them get jobs (though it will). Not because we want them to become cyber criminals or privacy extremists (though some inevitably will). But because a generation of hackers and makers will force an imperceptible but unstoppable shake-up of British business.

This is the sort of shake-up you can’t effect via regulation. You can’t promise it in a manifesto pledge, and it doesn’t neatly fit into a campaign trail soundbite. But it’s what this country really needs if we’re ever going to realise the potential of the truly democratising technology at our fingertips.

There is a hacker ethos – or DoES Liverpool ethos, or whatever you want to call it. But if you ask me, before it changes the world, it all has to start at home.

  1. He argues agencies take too long to effect change and over-use pre-existing “one-size-fits-all” components to minimise effort. I’m not convinced: Both are simply symptoms of poor management or poor design, and both are just as likely to happen to a freelance maker as to an established agency. 

Open Educational Resources for Astronomy by Astrobites

Last Spring I taught an introductory astronomy course for non-science majors. It was difficult and fun. One of the most difficult parts was inventing activities and homework to teach specific concepts. Sometimes my activities fell flat. Thankfully, I had access to a number of astronomy education resources: the textbook, a workbook full of in-class tutorials, and the professors in my department who had previously taught introductory astronomy.

Open educational resources are meant to serve in this capacity, especially for teachers and students without the money to buy expensive texts. Like open source software, open educational resources are publicly-licensed and distributed in a format that encourages improvement and evolution. For examples, check out the resources hosted by Wikiversity. This is a sister project of Wikipedia’s, providing learning materials and a wiki forum to edit and remix those materials. It’s great for teachers in all disciplines! And it hosts a lot of astronomy material. But like Wikipedia, it’s deep and wide. It’s easy to get lost. (“Wait, why am I reading about R2D2?”) And like articles on Wikipedia, the learning materials on Wikiversity vary in quality.

Today’s paper introduces a project called astroEDU. It aims to make astronomy learning resources, like those you can find on Wikiversity, easier to find and of higher quality. To do this, the authors introduce a peer-review structure for education materials modeled on the one widely accepted for scholarly research. Educators may submit a learning activity to the astroEDU website. The submission is evaluated by two blind reviewers, an educator and an astronomer. It may go through revision, or it may be scrapped. If it’s not scrapped, it’s published on the website, and on sister sites like Open Educational Resources Commons. The result is a simple, excellent lesson plan describing the learning goals, any materials you need to complete the activity, step-by-step instructions, and ideas for finding out what your students learned.


Screenshot from “Star in a Box”, an educational activity available at astroEDU.

To the right is a screenshot from an example activity, “Star in a Box”, which won an award last year from the Community for Science Education in Europe. It uses a web-based simulation tool developed by the Las Cumbres Observatory. Students are directed to vary the initial mass of a model star and explore its evolution in the Hertzsprung-Russell plane. This is the kind of thing I could have used to supplement the textbook in my introductory astronomy course. And so could a high school teacher struggling along without any textbooks.

AstroEDU is targeted at primary and secondary school teachers. It was launched only a year ago, supported by the International Astronomical Union’s Office for Astronomy Development. It may grow into a powerful tool for open educational resources, something like a peer-reviewed Wikiversity. If you are a professional astronomer or an educator, it looks like you can help by signing up as a volunteer reviewer.

At last, Ceres is a geological world by The Planetary Society

I've been resisting all urges to speculate on what kinds of geological features are present on Ceres, until now. Finally, Dawn has gotten close enough that the pictures it has returned show geology: bright spots, flat-floored craters, and enigmatic grooves.

Dawn Journal: Ceres' Deepening Mysteries by The Planetary Society

Even as we discover more about Ceres, some mysteries only deepen. Mission Director Marc Rayman gives an update on Dawn as it moves ever closer to its next target.

February 25, 2015

A different cluetrain by Charlie Stross

Right now, I'm chewing over the final edits on a rather political book. And I think, as it's a near future setting, I should jot down some axioms about politics ...

  1. We're living in an era of increasing automation. And it's trivially clear that the adoption of automation privileges capital over labour (because capital can be substituted for labour, and the profit from its deployment thereby accrues to capital rather than being shared evenly across society).

  2. A side-effect of the rise of capital is the financialization of everything—capital flows towards profit centres and if there aren't enough of them profits accrue to whoever can invent some more (even if the products or the items they're guaranteed against are essentially imaginary: futures, derivatives, CDOs, student loans).

  3. Since the collapse of the USSR and the rise of post-Tiananmen China it has become glaringly obvious that capitalism does not require democracy. Or even benefit from it. Capitalism as a system may well work best in the absence of democracy.

  4. The iron law of bureaucracy states that for all organizations, most of their activity will be devoted to the perpetuation of the organization, not to the pursuit of its ostensible objective. (This emerges organically from the needs of the organization's employees.)

  5. Governments are organizations.

  6. We observe the increasing militarization of police forces and the privileging of intelligence agencies all around the world. And in the media, a permanent drumbeat of fear, doubt and paranoia directed at "terrorists" (a paper tiger threat that kills fewer than 0.1% of the number who die in road traffic accidents).

  7. Money can buy you cooperation from people in government, even when it's not supposed to.

  8. The internet disintermediates supply chains.

  9. Political legitimacy in a democracy is a finite resource, so supplies are constrained.

  10. The purpose of democracy is to provide a formal mechanism for transfer of power without violence, when the faction in power has lost legitimacy.

  11. Our mechanisms for democratic power transfer date to the 18th century. They are inherently slower to respond to change than the internet and our contemporary news media.

  12. A side-effect of (7) is the financialization of government services (2).

  13. Security services are obeying the iron law of bureaucracy (4) when they metastasize, citing terrorism (6) as a justification for their expansion.

  14. The expansion of the security state is seen as desirable by the government not because of the terrorist threat (which is largely manufactured) but because of (11): the legitimacy of government (9) is becoming increasingly hard to assert in the context of (2), (12) is broadly unpopular with the electorate, but (3) means that the interests of the public (labour) are ignored by states increasingly dominated by capital (because of (1)) unless there's a threat of civil disorder. So states are tooling up for large-scale civil unrest.

  15. The term "failed state" carries a freight of implicit baggage: failed at what, exactly? The unspoken implication is, "failed to conform to the requirements of global capital" (not democracy—see (3)) by failing to adequately facilitate (2).

  16. I submit that a real failed state is one that does not serve the best interests of its citizens (insofar as those best interests do not lead to direct conflict with other states).

  17. In future, inter-state pressure may be brought to bear on states that fail to meet the criteria in (15) even when they are not failed states by the standard of point (16). See also: Greece.

  18. As human beings, our role in this picture is as units of Labour (unless we're eye-wateringly rich, and thereby rare).

  19. So, going by (17) and (18), we're on the receiving end of a war fought for control of our societies by opposing forces that are increasingly more powerful than we are.

Have a nice century!


a) Student loans are loans against an imaginary product—something that may or may not exist inside someone's head and which may or may not enable them to accumulate more capital if they are able to use it in the expected manner and it remains useful for a 20-30 year period. I have a CS degree from 1990. It's about as much use as an aerospace engineering degree from 1927 ...

b) Some folks (especially Americans) seem to think that their AR-15s are a guarantor that they can resist tyranny. But guns are an 18th century response to 18th century threats to democracy. Capital doesn't need to point a gun at you to remove your democratic rights: it just needs more cameras, more cops, and a legal system that is fair and just and bankrupts you if you are ever charged with public disorder and don't plead guilty.

c) (sethg reminded me of this): A very important piece of the puzzle is that while capital can move freely between the developed and underdeveloped world, labour cannot. So capital migrates to seek the cheapest labour, thereby reaping greater profits. Remember this next time you hear someone complaining about "immigrants coming here and taking our jobs". Or go google for "investors visa" if you can cope with a sudden attack of rage.

What's wrong with my private cloud ... by Simon Wardley

These days, I tend not to get too involved in cloud (having retired from the industry in 2010 after six years in the field) and my focus is on improving situational awareness and competition (in my view a much bigger problem) through techniques such as mapping. I do occasionally stick my oar in, though, when some very elementary mistakes appear. One of those has raised its head again, namely the cost advantages / disadvantages of private cloud.

First, private cloud was always a transitional play which would ultimately head to niche unless a functioning competitive market forms, enabling a swing from centralised back to decentralised. This functioning market doesn't exist at the infrastructure layer, and so the trend to niche for private cloud is likely to accelerate. There's a whole bunch of issues related to the impact of ecosystems (in terms of innovation / customer focus and efficiency rates), performance and agility, effective DevOps, financial instruments (spot market, reserved), capacity planning / use, focus on service operation and potential for sprawl which can work out very negatively for private cloud, but these are beyond the scope of this post. I simply want to focus on cost and, to make my life easier, the cost of compute.

Here's the problem: two competitors going head to head in a field, both with over £15Bn in annual revenue and both spending over £150M p.a. on infrastructure (i.e. compute, hosting etc). This IT component represents a good but not huge chunk of their annual revenue (around 1%).

One of the competitors was building a private cloud. It was quite proud of the fact that it reckoned it had achieved almost parity with AWS EC2 (comparing, in this case, to an m3.medium) at around $800 per year. What made them happy is that the public Amazon cost is around $600 per year, so they weren't far off, and they gained all the "advantages" of being a private cloud. This is actually a disaster, caused by three basic mistakes (ignoring all the other factors listed above which make private cloud unattractive).

Mistake 1 - Externalities

The first question I asked (as I always do) is what percentage of the cost was power? The response was the usual one that fills me with dread - "That comes from another budget". Ok, when building a private cloud you need to take into consideration all costs: power, building, people, cost of money etc. On average the hardware & software component tends to be around 20-25% of the cost; power tends to be the lion's share. So they weren't operating at anywhere close to $800 per equivalent instance per year, but closer to $3,000 per year. On the face of it this was a 5x differential, but then there's future pricing.
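The arithmetic behind Mistake 1 can be sketched quickly (the $800, $600 and 20-25% figures are from this post; the function is mine):

```python
# Back-of-the-envelope check of Mistake 1: if the budgeted $800/yr covers only
# hardware & software, and hardware & software are typically ~20-25% of the
# fully loaded cost (power, building, people, cost of money ...), the real
# cost per equivalent instance is several times higher.

def fully_loaded_cost(hw_sw_cost, hw_sw_fraction=0.25):
    """Estimate total cost when only the hardware/software share is budgeted."""
    return hw_sw_cost / hw_sw_fraction

private = fully_loaded_cost(800)   # 3200.0, i.e. "closer to $3,000 per year"
public = 600                       # quoted AWS EC2 public price per year

print(private / public)            # ~5.3, the rough "5x differential"
```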

Mistake 2 - Future pricing

Many years ago I calculated how efficiently you could run a large scale cloud and from this guesstimated that AWS EC2 was running at over 80% margin. But how could this be? Isn't Amazon the great low margin, high volume business? The problem is constraint. 

AWS EC2 is growing rapidly and compute happens to be elastic, i.e. the more we reduce the price, the more we consume. However, there's a constraint: it takes time, money and resource to bring large scale data centres online. With such constraints it's relatively easy to reduce the price so much that demand exceeds your ability to supply, which is the last thing you want. Hence you have to manage a gentle decline in price. In the case of AWS I guesstimated that they were focused on doubling total capacity each year. Hence, they'd have to manage the decline in price to ensure demand kept within the bounds of their supply, factoring in the natural reduction of underlying costs. There are numerous techniques that can also help, e.g. increasing the size of the default instance, but we won't get into that.

Though their price is currently $600 per year, I take the view that their costs are likely to be sub $100 which means that a lot of future price cuts are on the way.  The recent 'price wars' from Google seem more about Google trying to find where the price points / constraints of Amazon are rather than a fully fledged race to the bottom. All the competitors have to keep a watchful eye on demand and supply.

Let us assume, however, that AWS is less efficient than I think and the best price they could achieve is $150. This creates a future 20x differential between the real cost of the private environment and the public price. However, it seems like no big shakes: even if the competitor was using all public cloud (i.e. data centre zero), which is unlikely, it simply means they're spending $7.5M compared to our $150M, and whilst that might be $140M of savings, it's peanuts compared to our revenue ($15Bn+) and the business that's at stake. It's not worth the risk.
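Putting numbers on that naive comparison (all figures are the post's estimates):

```python
# Mistake 2 in numbers: a best-case future AWS price of $150/yr against a real
# private cost of ~$3,000/yr gives the 20x differential.
private_cost = 3000
aws_future_price = 150
differential = private_cost / aws_future_price   # 20.0

# The naive conclusion: a data-centre-zero competitor would spend only 1/20th
# of the $150M annual infrastructure budget.
annual_spend = 150_000_000
naive_competitor_spend = annual_spend / differential
print(differential)             # 20.0
print(naive_competitor_spend)   # 7500000.0, the "$7.5M compared to our $150M"
```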

This couldn't be more wrong.

Mistake 3 - Future Demand

Cloud computing simply represents the evolution of a world of products to a world of commodity and utility services. This process of "industrialisation" has repeated many times before in our past and has numerous known effects from co-evolution of practice (hence DevOps), rapid increases in higher order systems, new forms of data (hence our focus on big data), increases in efficiency, a punctuated equilibrium (exponential change), failure of companies stuck behind inertia barriers etc etc.

There's a couple of things worth noting. There exists a long tail of unmet business demand for IT related projects. Compute resources are elastic. The provision of commodity (+utility) forms of an activity enables rapid development of often novel higher order systems (utility compute allows a growth in analytics etc) which in turn evolve and over time become industrialised themselves (assuming they're successful). All of this increases demand for the underlying component.

Here's the rub. In 2010 I could buy a million times more compute resource than in the 1980s but does that mean my IT budget for compute has reduced a million fold in size during that time? No. What happened is I did more stuff.

In the same way, Cloud is extremely unlikely to reduce IT expenditure on compute (bar a short blip in certain cases) because we will end up doing more stuff. Why? Because we have competitors and as soon as they start providing new capabilities then we have to co-opt / adapt etc.

So, let us take our 20x differential (which in all likelihood is much higher) and assume we're building our private cloud for 5yrs+, giving time for price differentials to become clear. Our competitor isn't going to reduce their IT infrastructure spending; they are likely to continue spending $150M p.a. but do vastly more stuff. In order to maintain parity given the differential, we're going to need to spend closer to $3Bn p.a. In reality we won't do this, but we will spend vastly more than we need and still just lose ground to competitors - they'll have better capabilities than we do.
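Mistake 3 flips the earlier comparison around; a sketch with the post's figures:

```python
# Because compute demand is elastic, the competitor keeps spending $150M p.a.
# on public cloud and simply does ~20x more stuff; matching that capability
# privately means multiplying by the differential, not dividing by it.
competitor_spend = 150_000_000
differential = 20
parity_spend = competitor_spend * differential
print(parity_spend)   # 3000000000, the "closer to $3Bn p.a." in the post
```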

You must keep in mind that some vendors are going to lose out rather drastically from this shift towards utility services. However, they can limit the damage if they can persuade you to put yourself in a position where you need to buy vastly more equipment (due to the price differential) just to keep up with your competitors. This is not being done for your benefit. When you're losing your lunch, sometimes you can recover by feasting on a few individuals. If that means playing to the inertia of would-be meals (FUD, security, loss of jobs etc) then all is fair in war and business. You must ask yourself whether this is the right course of action, or whether you are being lined up as someone's meal ticket.

Now, I understand that there are big issues with heading towards data centre zero and migrating to public cloud often related to the legacy environments. This is a major source of inertia particularly as architectural practices have to change to cope with volume operations of good enough components (cloud) compared to past product based practices (e.g. the switch from scale up to scale out or N+1 to design for failure etc). We've had plenty of warning on this and there's all sorts of sensible ways for managing this from the "sweat and dump" of legacy to (in very niche cases) limited use of private. 

But our exit cost from this legacy will only grow over time as we add more data. We're likely to see a crunch on cloud skills as demand rockets and this isn't going to get easier. There is no magic "enterprise cloud" which can enable future parity with all the benefits of commodity and volume operations but provided with non-commodity products customised to us in order to make our legacy life easier. The economics just don't stack up. 

By now, you should be well on your way to data centre zero (several companies are likely to reach this in 2015). You should have been "sweating and dumping" those legacy (or what I prefer to call toxic) IT assets over the last four+ years. You should have very limited private capacity for those niche cases which you can't migrate unless you can achieve future pricing parity with EC2 (be honest here, are you including all the costs? What is your cost comparison really?). You should have been talking with regulators to solve any edge cases (where they actually exist). You should be well on the way to public adoption.

If you're not then just hope your competitors return the favour. Be warned, don't just believe what they say but try and investigate. I know one CIO who has spent many years telling conferences why their industry couldn't use public cloud whilst all the time moving their company to public cloud. 

Some companies are going to be sorely bitten by these three mistakes of externalities, future pricing and future demand. Make sure it isn't you.

-- Additional note

Be very wary of the hybrid cloud promise and if you're going down that route make sure it translates to "a lot public with a tiny bit of private". This is a trade off but you don't want to get on the wrong side and over invest in private. There's an awful lot of vendors out there encouraging you to do what is actually not in your best interest by playing to inertia (cost of acquiring new practices, cost of change, existing political capital etc). There's some very questionable "consultant" CIOs / advisors giving advice based upon not a great deal and dubious beliefs.

Try and find someone who knows what they're talking about and has experience of doing this at reasonable scale e.g. Bob Harris, former CTO for Channel 4 or Adrian Cockroft, former Director Engineering Netflix. These people exist, go get some help if you need it and be very cautious about the whole private cloud space.

-- Additional note

Carlo Daffara has noted that in some cases, the TCO of private cloud (for very large, well run environments) can achieve 1/5th of AMZN public pricing. Now, there are numerous ecosystem effects around Amazon along with all sorts of issues regarding variability, security of power, financial instruments (reserved instances etc) and many of those listed above which create a significant disadvantage to private cloud. But in terms of pure pricing (only part of the puzzle) then a 1/5th AMZN public pricing is just on the borderline of making sense for the immediate future. Anything above this and then you're into dangerously risky ground. 

With Charley Pollard: Series One. by Feeling Listless

Audio  Here we are then, after all the preamble, with the real opening chapter of the Eighth Doctor audios, albeit with some new retrospective additions.  Halfway through the first I realised I first heard these when I was in my twenties.  I'm forty years old.  In chronological terms, there's now a longer period between the original publication of Storm Warning and today than between the TV movie and Rose.  Actually thinking on, the revival has now been on-air longer than that gap too, which is irrelevant to this cause but jiggers, makes me feel old.  But I was only twelve when the classic series finished which makes me younger than some.  Sorry.  Up until this point, my purchasing of the Big Finish audios had been pretty sporadic and remained so.  The Eighth Doctor audios are the only complete run I stuck with, which meant that these initial series had the genuine feeling of being in seasons with a long gap in between, and even longer beyond Neverland.  If they seem so familiar listening to them again now, it's because I had them on repeat in the months between seasons.

Storm Warning

At the risk of pitching over old ground: what I failed to notice while writing my tribute to Alan Barnes's story back in 2013 is that in the opening ten minutes or so the Doctor isn't talking to himself, he's talking to us, which of course he is, but it's in such a way that we effectively become his companions, the viewpoint characters. In reintroducing this incarnation to the audience, Barnes makes our process of switching on a CD player (as it was back then) a real world equivalent of Ian and Barbara blundering into a junkyard, Rose into a department store basement or Christmas Clara up a spiral staircase, of entering his world. But what's really interesting is that when Charley's introduced later, that doesn't change. He has someone to talk to, but towards the end, when he realises how he's potentially affected time, we're listening to him soliloquise, regarding his new companion from afar, confiding in us as to what he may have done. Sound familiar?

The Sword of Orion

Cybermen! Primarily of interest here is how little it resembles the new Who approach to second stories for companions, in that Charley's pretty much TARDISed in already. The visit to the market at the start of the play could be seen as a forerunner to The Rings of Akhaten, perhaps, but Pollard's already pretty much accepted the concept of time travel and the blue box and all of that business. The closest parallel is Peri in The Caves of Androzani, who in the opening scenes already seemed like she'd been travelling with the Doctor for months (and as Big Finish later revealed ...) Also contains one of the scariest moments in Who history as three of the main characters enter a conversion chamber and nothing good comes from it. I've always been intrigued as to how much of the script and story changed from the AudioVisuals version to accommodate McGann. Incidentally, some soul has animated the first few scenes and uploaded the results to YouTube.

The Light at the End

Dropped into this slot by a number of online continuities, thanks to Charley's repeated mention of the R101, and apart from the lack of reference to poor Ramsey the Vortisaur, it fits perfectly, especially since Eighth is treated as the "present" incarnation and the lead-in figure in much the same way as Third, Fifth and Eleventh are elsewhere.  Though it's worth noting that the story generally foregrounds Fifth, Sixth and Seventh, perhaps in order to create some balance given their lack of major televisual participation during the Fiftieth.  Of the Eighth material, I'm not entirely sure about him sodding off when Charley's in distress, but it's not entirely out of character based on what we've heard up until now.  If you haven't heard it, it's also probably worth listening to for the world's best Russell Tovey impression by John Dorney as Bob Dovie.  I must confess, having attempted to stay entirely spoiler free on first listen, I actually thought it was Russell Tovey.

The Stones of Venice

Which is about as conventional as Paul Magrs gets, which is saying something given that it's about Venice falling to the sea, coaxed along by fish people, so a homage to The Underwater Menace, I suppose.  If this first series is about box ticking for the Eighth Doctor, then this is the one with Michael Sheard.  To this day, I'm not sure what I make of it.  On the one hand it's hilariously theatrical in language and performance and you can see its origins as a Fourth Doctor tale bubbling across the surface, but the four episode structure does it no favours in terms of drawing out the action, and as with all of these original stories the Doctor's rather dragged along by events rather than motivating them.  But McGann and Fisher are clearly having a ball, their chemistry crystallizing with each word.

Minuet in Hell

Brigadier Alistair Gordon Lethbridge-Stewart! Fly, meet ointment. As his Datacore entry shows, the post-classic series timeline for the Brig is all over the place, having him either live until at least 2050 thanks to his own regeneration in the New Adventures or die in time for Kate Stewart to talk about him in the past tense.  The audios have never quite managed to work out how they relate to the books.  At the time of production, the Sam mentioned here was supposed to be Sam Jones (later retconned to be Samson), but that makes a nonsense of The Dying Days, which was also the Brig's first encounter with Eighth.  Either way, I always find Minuet in Hell desperately difficult to get through, overlong and unfunny as it is in places, only really snapping into place when the Doctor's memory returns (and how often is that the case with this incarnation?).  His and the Brig's final conversation is incredibly poignant.

Destiny of the Doctor: Enemy Aliens

Bloomin' marvellous.  The Eighth Doctor entry in AudioGo's contribution to the 50th anniversary, this was the unexpected pleasure which brought Charley as close to canonicity as we thought she could ever be, until McGann said her name on actual television (albeit on the red button).  A Buchan pastiche of all things.  As I said at the time, you can hear India remembering how to play the character after a few years out as it goes along, slipping from simply reading the text to acting it about ten minutes in, with her scenes with Michael Maloney's mysterious helper Hilary offering some brand new audio drama.  Writer Alan Barnes is in his element, inventing a missing adventure which occurs just before this one in the same time period, and which sounds like the kind of messy, overfull but thrilling story that might have turned up in the novels.  One of the highlights of these stories was hearing the given reader offer us their Matt Smith impression, and this doesn't disappoint on that score either.

Beware the Post Money Trap by Albert Wenger

In the current valuation environment many entrepreneurs seem to believe that only two numbers matter in a financing: the amount of the raise and the dilution. This leads them to buy into the idea that more money for the same dilution is always strictly better. Combined with a lot of money being available from investors, this is resulting in Series A rounds of $10 million and more.

What could possibly go wrong? The number everyone seems to be forgetting about is the post-money valuation. It is a crucial number, though, as long as a company is not yet financed to profitability: it determines how far the company needs to come to be able to raise money again. It needs to build enough value that the next round of fundraising can be at, or ideally above, the current post-money valuation.

If you do a Series A with a $50 million post-money, it means you have to build something that people will consider to be worth $50 million when you next raise money. Now if your company hits a great growth trajectory and the financing environment stays as it is, then great. But if either of those two conditions is not met you will find yourself in the post-money trap.
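A minimal sketch of the mechanics, with hypothetical round sizes (the function and the numbers are mine, not from the post):

```python
# Post-money = pre-money + amount raised; dilution = raise / post-money.
# Two rounds with identical 20% dilution set very different bars for the
# next fundraise.

def round_terms(pre_money, raise_amount):
    post_money = pre_money + raise_amount
    dilution = raise_amount / post_money
    return post_money, dilution

print(round_terms(20_000_000, 5_000_000))    # (25000000, 0.2): next round must clear ~$25M
print(round_terms(40_000_000, 10_000_000))   # (50000000, 0.2): next round must clear ~$50M
```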

Again, you can get caught in this trap in two different scenarios. The first one is that you hit a bump in the road: users or revenues or whatever the most relevant metric for your business is winds up not growing as fast as you thought, or worse yet hits a temporary plateau, possibly even a small setback, just as you need to raise more money. The second one is that the external financing environment adjusts, for instance because the stock market drops 20%. Then even if you hit all your milestones, that may suddenly no longer clear the hurdle you set for yourself.

Some founders seem to ignore this logic entirely. Others come back and say “but we will have that much more money and hence time to clear the hurdle.” That too, however, is faulty logic. It reminds me a lot of the problem of getting rockets into space. The simplistic answer would seem to be: just add more fuel. The problem though is that fuel too weighs something which now needs to be lifted into space. Your burn rate is pretty much the same thing. Unless you are super disciplined in how you spend the money, you will have a higher burn rate the more you raise, which makes subsequent funding harder (instead of easier).
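The rocket-fuel point can be made concrete with made-up numbers (the 5%-per-$1M burn creep is purely an illustrative assumption):

```python
# If monthly burn creeps up with the amount raised, doubling the raise does
# not double the runway.

def runway_months(raised, base_burn, burn_creep_per_million=0.05):
    # Assumption: burn grows ~5% of the base for every $1M raised.
    burn = base_burn * (1 + burn_creep_per_million * raised / 1_000_000)
    return raised / burn

print(runway_months(5_000_000, 250_000))    # 16.0 months
print(runway_months(10_000_000, 250_000))   # ~26.7 months, not 32
```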

Another, less common, founder objection is: well, if necessary we will just do a down round. This ignores that down rounds are incredibly hard to do. For reasons of founder, employee and investor psychology they rarely happen. And if they do, they are often damaging to the company. So when you are in the post-money trap you have largely made your company non-financeable.

Finally, this situation is highly asymmetric between funds and companies. First, funds have portfolios, so some deals with dangerously high post-money valuations can be offset — if one is disciplined — with others that are more attractive. Second, when investing in preferred there is a lot of downside protection built in that’s not available to the common shares. Hence a simple test of just how far you are stretching is to ask investors (and better yet yourself) how much common they (you) would buy right now and at what price.

A New Type of Stellar Feedback by Astrobites

Title: Stellar feedback from high-mass X-ray binaries in cosmological hydrodynamical simulations

Authors: M. C. Artale, P. B. Tissera, & L. J. Pellizza

First Author’s Institution: Instituto de Astronomia y Fisica del Espacio, Ciudad de Buenos Aires, Argentina

Paper Status: Accepted for publication in MNRAS

The commonly accepted theory of dark matter (called Lambda-CDM) provides us with a good general understanding of how structure (filaments, galaxy clusters, galaxies, etc.) forms in our Universe. This is shown again and again in simulations of our Universe in a box, made by following the motion of dark matter particles and their clustering as controlled by gravity. However, problems arise in trying to reproduce the detailed baryonic properties (namely stars and gas) of real galaxies. One of the biggest problems is faithfully reproducing the star formation history of real galaxies, both in terms of when stars form (early in the Universe or more recently) and how many form.  As it turns out, “feedback” processes are essential in reproducing properties of galaxies, including supernova explosions, which can heat, ionize, and remove gas from galaxies (shutting off star formation). In many cases supernovae may be the dominant form of feedback but, as the authors of today’s paper explore, they may not be the only important form, especially in the early Universe.

High-mass X-ray binaries (HMXB’s) are binary star systems consisting of a massive star orbiting either a neutron star or a black hole. These systems produce a significant amount of X-ray emission, originating from the accretion of material from the massive star onto the more compact companion; this is shown in an artist’s drawing in Fig. 1. In some cases, these systems can produce fast moving jets, dumping kinetic energy into the surrounding gas. Both of these can heat, ionize, and blow away gas within a galaxy, and in turn play a significant role in how stars form within the galaxy. The authors indicate that recent work has shown that these systems are more powerful in the early Universe, and may play a role in controlling star formation in early galaxies that will have a lasting impact for their entire evolution. For the first time, the authors implement a HMXB feedback method into a cosmological hydrodynamics code, and explore how it affects galaxy evolution in the early Universe.

Fig. 1: An artist’s conception of a black hole binary, where a massive star (right) is orbiting a black hole (left). The massive star is losing gas to the black hole, forming an accretion disk and jets. (Image Credit: NASA)

Modelling the Feedback from HMXB’s

The authors use a version of the GADGET-3 smooth particle hydrodynamics (SPH) code that includes a chemical evolution model, radiative heating and cooling (allowing the gas to heat and cool by absorbing energy from photons or radiating it away), and methods for star formation and supernova feedback from Type II and Type Ia supernovae. New in this work, however, is a model to simulate the feedback from HMXB’s. Due to the finite resolution of the simulations, they construct a model that accounts for a population of HMXB’s in a galaxy, rather than individual HMXB’s. Using known estimates for the number distribution of stars (known as the IMF), the authors estimate that about 20% of massive stars that form black holes in a given galaxy form in binary systems. In their simulations, these systems deposit 10^52 erg of kinetic energy into their surroundings (about 10 times that of a single supernova explosion), and spend about 3 million years radiating X-rays at a luminosity of about 10^38 erg s^-1. These numbers are not well constrained, but the authors tested a few different values and found these to produce the most realistic amount of feedback (as compared to observations).
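As a rough consistency check of those model numbers (the arithmetic here is mine, not the paper's):

```python
# A HMXB radiating ~1e38 erg/s for ~3 Myr emits roughly 1e52 erg in X-rays,
# comparable to the 1e52 erg of kinetic energy it deposits and to ~10
# supernovae at ~1e51 erg each.
SECONDS_PER_YEAR = 3.156e7

luminosity = 1e38                    # erg/s
lifetime = 3e6 * SECONDS_PER_YEAR    # ~9.5e13 s

radiated = luminosity * lifetime     # ~9.5e51 erg, i.e. ~1e52 erg
print(f"{radiated:.1e} erg")
```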

The authors produce two primary cosmological simulations, evolving from high redshift down to a redshift of about 2 (about 5 billion years from present day), using only supernova feedback (designated S230-SN) and supernova feedback + HMXB feedback (designated S230-BHX). The authors use these two simulations to compare how a HMXB feedback model affects the total rate of star formation in their simulations, and the properties of star formation of individual galaxies in their simulations. Some of these results are discussed below.

Controlling Star Formation

Fig. 2: The star formation rate density in the entire simulation (i.e. how much mass is converted into stars per year per unit volume, computed for the entire simulation box) as a function of redshift for the SN only (blue, dashed) and SN + HMXB (red, solid) simulations. The horizontal axis is redshift, with lower redshift (z=2) closer to present day than higher redshift (z=10). The black data points are observations from Behroozi et al. 2013. (Source: Fig. 1 of Artale et al. 2015)

Fig. 2 compares the effect of the SN (blue) vs. SN + HMXB (red) feedback models on the star formation of the entire simulation box as a function of redshift, from the early Universe (right) to z = 2 (left). The vertical axis gives the star formation rate (SFR) density in the entire simulation, computed as the total mass of gas converted into stars per unit time per unit volume. At high redshift (early times), the HMXB feedback suppresses the total SFR density and delays star formation towards lower redshift (later times). The higher SFR density in the HMXB model at lower redshifts arises because much more gas is left over at later times to be turned into stars, since it was not consumed as rapidly at high redshift as in the SN-only model.
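
For concreteness, the quantity on the vertical axis of Fig. 2 is just a box-wide average. A minimal sketch, with assumed units of solar masses, years, and comoving Mpc³ (the function name and example numbers are invented):

```python
# Star formation rate density: stellar mass formed per unit time per
# unit comoving volume, averaged over the whole simulation box.
# Units here are an assumption: Msun, yr, and comoving Mpc^3.

def sfr_density(mass_formed, dt, box_volume):
    return mass_formed / dt / box_volume

# e.g. 1e8 Msun formed over 1e8 yr in a (10 Mpc)^3 box:
rho_sfr = sfr_density(1e8, 1e8, 10.0**3)   # -> 1e-3 Msun / yr / Mpc^3
```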

Fig. 3: Ratio of galaxy gas mass to dark matter halo mass for galaxies in the simulations as a function of their dark matter halo mass. This is plotted for three redshifts, z = 7, z = 6, and z = 4, for the SN only model (dashed) and the SN + HMXB model (solid). (Source: Artale et al. 2015)

The three most significant components of a galaxy, in terms of mass, are its dark matter halo, gas, and stars. The two feedback models have significantly different effects on the amount of gas contained within galaxies over time. Fig. 3 shows the ratio of galaxy gas mass to galaxy dark matter mass as a function of the galaxy’s dark matter halo mass at three redshifts, z = 7 (green triangles), z = 6 (blue squares), and z = 4 (violet circles). The SN-only simulation is shown with dashed lines, and SN + HMXB with solid lines. As shown, the SN-only simulations have less gas at all redshifts and at almost all halo masses. In the SN-only model, the higher SFR at early times consumes more gas and drives bigger outflows, leaving less gas within the galaxy overall. The HMXB + SN model, however, reduces the SFR at early times without driving large gas outflows, allowing for gas-rich galaxies whose SFR gradually increases towards low redshift (Fig. 2) as the effectiveness of HMXBs decreases.

Finding the Right Balance

Getting feedback and star formation right in simulations means, in part, reproducing the star formation history over cosmic time. The authors used HMXB feedback together with SN feedback to reduce star formation in galaxies at high redshift (in better agreement with observations than SN feedback alone) while maintaining it at low redshift, i.e. delaying star formation. This is a promising step towards a complete model of feedback in galaxies.

Ask Me Anything (on reddit) About NASA's Budget by The Planetary Society

Starting at 11am PST/2pm EST on Wednesday, the space policy team at the Society will hold an AMA (Ask Me Anything) about NASA's new budget and the process of space exploration.

February 24, 2015

My Favourite Film of 2007 by Feeling Listless

Film Watching Danny Boyle's Sunshine was an odd experience because, unlike my later spoilerphobic ways, I'd been following the production of the film through the blog on the official website written by Gia Milinovich. Here she is sneaking onto the airlock set early in filming.  Produced at a time when blogs were still in their imperious stage, just before Facebook, Twitter and other social media came along and burnt it all down (where this is posted surely now but smouldering embers), it struck a balance between teasing story elements, investigating the science (including videoed talks with her husband Dr Brian Cox, who was the science advisor on the film) and glimpses of the production.  It's all here and here.  Gia offers some general memories about the experience here.

What was odd was that even after all of that reading, sitting in front of the giant screen at, I think, Picturehouse at FACT Screen Two, everything fell away.  Much like Soderbergh's remake of Solaris a few years before and Gravity a few years later, once the luminous, awe inspiring mass of the celestial body filled the screen, I entirely forgot that I was watching a fictional construct (almost), my suspension of disbelief fully engaged.  That's usually how I know I'm enjoying a film.  My liberal arts knowledge about genre and editing and narrative and stars and all of that, my critical stance, disappear (usually - sometimes I'll grow to love film because of those things, that it knows that I know and so does something amazing because of that).

None of which stopped me from writing a review of this film for this blog in which I singled out Chris Evans as a "mega star in waiting" but also noticed there hadn't been many films in that period about "giant ships flying through deep space on the big screen without a Jedi or Vulcan piloting them" (last year Marvel produced huge blockbusters so I was already predicting the success of the MCU a year before the release of Iron Man).  It is a film which has grown in stature across time and, along with Moon and the aforementioned Solaris and Gravity, demonstrates that intelligent, or at the very least meditative, science fiction is still being made (I'd also add The Last Days on Mars but no one saw that).  Certainly there aren't many films of this ilk which could provide enough material for an hour long discussion like this:

My DVD copy of Sunshine was bought at the old Zavvi shop at Liverpool One just before it transmuted into Head and then closed anyway.  Watching it again that evening on a smaller screen (well, smaller than a cinema's) did little to diminish its power and indeed increased it, as often happens when you know the fate of characters, the mundane moments before disaster strikes given extra poignancy because you now have knowledge about their lives that isn't available to you with your unpredictable future.  Ignoring the disaster element of that sentence, that's also why production blogs like Gia's are still compelling.  After having seen the film, it's a way of looking back to a time when outcomes were uncertain, when we didn't know what the results would be.  I wonder what Gia thinks about her blog now.

No Need for De-trending: Finding Exoplanets in K2 by Astrobites

Title: A systematic search for transiting planets in the K2 data

Authors: Daniel Foreman-Mackey et al.

First Author’s Institution: New York University

In May 2013, the Kepler mission came to an end with the failure of a critical reaction wheel. This reaction wheel was one of four responsible for keeping Kepler pointed at a fixed field of view so that it could perform its mission: to continuously monitor the brightness of the same 150,000 stars, in order to detect the periodic dimming caused by extrasolar planets crossing in front of their host stars.
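
The depth of that periodic dimming is just the ratio of the planet's projected disc area to the star's. A back-of-the-envelope sketch (the radii are standard round-number values; the function is mine, not from the paper):

```python
# Transit depth: fraction of starlight blocked = (Rp / Rstar)^2.

R_SUN = 6.957e5    # km
R_JUP = 7.149e4    # km
R_EARTH = 6.371e3  # km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    return (r_planet_km / r_star_km) ** 2

jupiter_dip = transit_depth(R_JUP)    # ~1% dip for a Sun-like host
earth_dip = transit_depth(R_EARTH)    # ~0.008%: needs Kepler-class precision
```

The four-orders-of-magnitude gap between these two numbers is why Kepler's photometric stability mattered so much, and why losing it was such a blow.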

A year later, in May 2014, NASA announced the approval of the K2 “Second Light” proposal,  a follow-up to the primary mission (described in Astrobites before: here, here), allowing Kepler to observe —although in a crippled state— a dozen fields, each for around 80 days at a time, extending the lifetime of humanity’s most prolific planet hunter.

The degraded stabilization system induces, however, severe pointing variations; the spacecraft does not stay locked-on target as well as it could previously. This leads to increased systematic drifts and thus larger uncertainties in the measured stellar brightnesses, degrading the transit-detection precision — especially problematic for traditional Kepler analysis techniques. The authors of today’s paper present a new analysis technique for the K2 data-set, a technique specifically designed to be insensitive to Kepler’s current pointing variations.

Fig 1: The K2 mission is Kepler’s second chance to get back into the planet-hunting game. Kepler’s pointing precision has however degraded, but novel pointing-insensitive analysis techniques aim to make up for that. Image credit: NASA/JPL.

Traditional Kepler Analysis Techniques

Most of the previous analysis techniques developed for the original Kepler mission included a “de-trending” or a “correction” step —where long-term systematic trends are removed from a star’s light curve— before the search for a transiting planet even begins. Examples of light-curves are shown in Figure 2. Foreman-Mackey et al. argue that such a step is statistically dangerous: “de-trending” is prone to over-fitting. Over-fitting generally reduces the amplitude of true exoplanet signals, making their transits appear shallower — smaller planets might be lost in the noise. In other words, de-trending might be throwing away precious transits, and real planets might be missed in the “correction” step!
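
The over-fitting danger is easy to reproduce in a toy setting. The sketch below is not the authors' pipeline and all the numbers are invented: it injects a box transit into a noisy, flat light curve and "de-trends" with an aggressive polynomial fit. Because the fit is done on everything, transit included, the polynomial absorbs part of the dip and the recovered transit is shallower than the true one.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(0)
t = np.linspace(0.0, 80.0, 2000)                 # ~80 days of observations
flux = 1.0 + 2e-4 * rng.standard_normal(t.size)  # flat star + white noise
in_transit = (t > 40.0) & (t < 42.0)
flux[in_transit] -= 0.01                         # inject a 1% box transit

# Aggressive "de-trending": fit a degree-60 Chebyshev polynomial to the
# whole light curve (transit included) and subtract it off.
trend = Chebyshev.fit(t, flux, deg=60)(t)
detrended = flux - trend + 1.0

true_depth = 1.0 - flux[in_transit].mean()
recovered_depth = 1.0 - detrended[in_transit].mean()
# recovered_depth comes out substantially smaller than true_depth:
# the polynomial has eaten part of the transit.
```

A joint fit of transit plus systematics, as the authors advocate, avoids this because the transit signal has its own model component to live in.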

Fig 2: Upper left panel: An illustration of the maximum likelihood fit (green lines) to a K2 light-curve (black dots) obtained from the authors’ new data-driven model. Bottom left panel: The residual scatter, i.e. the “de-trended” light-curve, for a given star in the K2 field. The residuals are very small, indicating a good fit. The right panel shows a de-trended light-curve for another star (EPIC 201613023) where the transit events are more evident (marked with green vertical lines). De-trended light-curves like the one on the right were commonly used with past analysis techniques. Foreman-Mackey et al.’s method, however, never uses de-trended light-curves in the search for transits, only for qualitative visualization and hand-vetting purposes (see further description of their method below). Figures 2 (left) and 4 (right) from the paper.

A new analysis method

In light of these issues, the authors sat down to revise the traditional analysis process. They propose a new method that fits a) the transit signals and b) the systematic trends simultaneously, effectively bypassing the “de-trending” step.

Their transit model is rigid: each transit is parametrized by a specific phase, period and duration. Their model for the systematic trends, on the other hand, is much more flexible. The authors assume that the dominant source of light-curve systematics is Kepler’s pointing variability. Other factors play a role too, like stellar variability and detector thermal variations, but the authors focus specifically on modelling the spacecraft-induced pointing variability.

Fig 3: An illustration of the top 10 basis light-curves. A set of 150 make up the authors’ linear data-driven analysis technique. Having a linear model has enormous computational advantages.

Equipped with large amounts of computer power, Foreman-Mackey et al. create a flexible systematics model, consisting of a set of 150 linearly independent basis light-curves (see Figure 3). These are essentially a set of light-curves that one can add together—with different amplitudes and signs— to effectively recreate any observed light-curve in the K2 data-set. The basis light-curves themselves are found by a statistical method called Principal Component Analysis (PCA) —a method to find linearly uncorrelated variables that describe an observed data-set— which will not be described further in this astrobite, but the interested reader can read more here. The choice to model the systematics linearly has enormous computational advantages, as the computations reduce to using familiar linear algebra techniques. The authors note that the choice to use exactly 150 of them was not strictly optimized, but that number allowed for enough model-flexibility to effectively model the non-linear pointing-systematics, while keeping computational-costs reasonable.
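
Both ingredients can be sketched in a few lines on synthetic data: extract basis light-curves from an ensemble with PCA (via an SVD), then express a single raw light curve as a linear combination of them with ordinary least squares. The array shapes, the fake "trends", and all names below are assumptions for illustration, and the paper keeps 150 basis vectors rather than 10:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake ensemble: 500 stars x 300 cadences sharing two systematic trends
# (standing in for pointing-induced variations), plus white noise.
cadences = np.linspace(0.0, 1.0, 300)
trends = np.vstack([np.sin(6.0 * cadences), cadences - 0.5])   # (2, 300)
amplitudes = rng.standard_normal((500, 2))                     # per-star
ensemble = amplitudes @ trends + 0.01 * rng.standard_normal((500, 300))

# (1) PCA via SVD of the mean-subtracted ensemble: the rows of vt are
# the basis light-curves, ordered by how much variance they explain.
centered = ensemble - ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:10]                                                # (10, 300)

# (2) Ordinary least squares: model one star's light curve as a
# linear combination of the basis light-curves.
target = centered[0]
coeffs, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
systematics_model = basis.T @ coeffs
residual = target - systematics_model   # what's left: noise (and transits)
```

In the authors' actual pipeline the rigid transit model is fitted jointly with these linear coefficients, so a real transit is never silently absorbed into the systematics term.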

With a set of basis light-curves the analysis pipeline then proceeds to systematically evaluate how well the joint transit-and-systematics model describes, and finds transits, in the raw light-curves (see again Figure 2). Finally, the pipeline returns the signals that have passed the systematic cuts: light-curves with the highest probability of having transits.

Results, Performance & Evaluation

The authors apply their analysis pipeline to the full K2 Campaign 1 data-set (comprising 21,703 light-curves, all publicly available here). The systematic vetting step returned 741 signals, which the authors then manually hand-vetted, throwing out, for example, obvious eclipsing binaries. The authors end with a list of 36 planet candidates transiting 31 stars—effectively multiplying the current known yield from K2 by a factor of five! Figure 4 (left) summarizes the distribution of the reported candidates in the radius-period plane.

Fig 4: Left: The fractional radii of the reported 36 planet candidates as a function of their period. Right: Detection efficiency in the radius-period plane, calculated by injecting synthetic transit signals into real K2 light curves, and calculating the fraction that are successfully recovered. Figures 11 (left) and 9 (right) from the paper.

The authors also discuss the performance and detection efficiency of their method. Injecting synthetic transit signals into real K2 light curves and measuring the fraction that are correctly identified and recovered by their pipeline gives an estimate of the expected performance under real analysis conditions. From Figure 4 (right) we see that their analysis technique performs best for short-period, large-radius planets. This makes sense: larger planets have larger transit signals, and shorter orbital periods increase the number of observed transits, making them more likely to be found.
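
The injection-and-recovery idea can be sketched with a toy detector. Here a simple signal-to-noise cut stands in for the real pipeline's full search, and every number below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 80.0, 0.02)   # ~80 days at roughly K2-like cadence
NOISE = 5e-4                     # fractional photometric scatter

def detected(depth, period_days):
    """Inject one box-transit train into pure noise and apply a crude
    S/N > 7 detection cut on the measured in-transit depth."""
    flux = NOISE * rng.standard_normal(t.size)
    in_transit = (t % period_days) < 0.15        # 0.15-day transits
    flux[in_transit] -= depth
    measured = -flux[in_transit].mean()
    snr = measured * np.sqrt(in_transit.sum()) / NOISE
    return snr > 7.0

# Efficiency per (depth, period) cell = fraction of injections recovered.
easy = np.mean([detected(1e-3, 5.0) for _ in range(100)])    # deep, short P
hard = np.mean([detected(1e-4, 40.0) for _ in range(100)])   # shallow, long P
```

As in Figure 4 (right), the deep, short-period injections are recovered essentially every time, while the shallow, long-period ones are almost never seen.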

Lastly, the authors share their data products online, along with their pipeline implementation under the MIT open-source software license. This means that anyone can take a stab at reproducing their findings, or perhaps even find new transit signals! So, now I want to ask you, dear reader, will you lengthen the list of known planet candidates today?

Spacewalk Timelapse Makes Cable Routing Look Fun by The Planetary Society

A timelapse video shows two NASA astronauts as they became typical neighborhood cable technicians—except for the fact that they were wearing space suits.

Clouds and Chasmata by The Planetary Society

New landscapes from Mars Express.

February 23, 2015

#potterwatch by Feeling Listless

Film The people who follow me on the social media that wasn't hip in the early noughties will have noticed that for the best part of a week I've been watching my way through the Harry Potter films for the first time since they were released. Potter's been a fair weather friend to this blog, me never having been that much of a fan. Should you care to glance through this search you'll find the odd review, links to some news items, awards predictions and the odd bit of commentary, notably from back in 2010 when regular reader Annette asked for my opinion of it during the end-of-year Review.  She'll be displeased to know (judging by the comment) that I still haven't read the books.  I don't read much fiction, especially if I know it's going to be turned into a film.  But that's a discussion for another time.

There isn't anything in this piece I don't still agree with, and having forgotten most of what I'd written there, because I forget most of anything I write here, there's some analysis in there which I thought again this time around as though it was all new, especially about the screenplays and narrative: how, unlike typical Hollywood screenplays but like art house films, they're not especially goal-orientated in the traditional sense, the screenplays (mainly by Steve Kloves) favouring cumulative, episodic incident that beneficially ignores three or four act structures in favour of the broadest of linking tissue between scenes that eventually leads to a climactic struggle about something connected to the title.  As a series it generally resembles the levels of a video game, with each film's Professor of Defence Against the Dark Arts as the end-of-level boss and he who shall not be named as the main boss at the end.

Having watched it in tandem with the fourth season of Game of Thrones, it's also possible to see that, now that the Potter film cycle is completed, what it most resembles is an eight-episode television mini-series of single episodes ending in a two-parter (the recap at the start of The Deathly Hallows Part Two is nothing so much as what's always happened in Doctor Who).  Perhaps if modes of production had been different back then, if Warners hadn't been desperate to have its Lord of the Rings follow-up, the best place for these stories would have been television, where JK's texts would have had room to breathe, with incidents and characters so obviously abbreviated in the films given room to develop.  Though there's obviously little chance you would have gotten this cast and this quality of production design on the small screen, even with a network like HBO taking a chance, and it's impossible to think of these characters being played by anyone else now.

Perhaps this is just me, having spent the past week with those actors and characters and this story, somewhat desperate to see more of it, because, well, I confess, I've become something of a fan.  Not necessarily of the story; however enjoyable the meander, there is a lot of waiting for something to happen via delayed exposition (see below).  But the atmosphere, the incidents, the humour, the, well, everything which exists during those waits.  There are moments, such as when Luna (one of the best characters to appear on film) simply turns up to dinner in fancy dress as a lion for no particular reason and no one comments on it, which are just awesome.  To watch the #potterwatch hashtag on my Twitter account over the past week is to watch someone falling in love with a series of films in a way which has surprised even me.  Why should this be the case?  Since I've written myself into a corner, here are at least three potential reasons.

(1)  Hermione Granger.  Yes, yes, Emma Watson is amazing, but under the pen of JK, Kloves, this actress and the production team, and unlike so many similar clones, Hermione is, as is acknowledged throughout the films, the most powerful of the three central characters and, more importantly, the cleverest.  Even when she's catatonic during The Chamber of Secrets, she's the one who ultimately points Ron and Harry towards said receptacle.  Hers is a story of empowerment, fighting against the patriarchy at every turn and, as the Buzzfeed video it would be wrong not to link to right now says, giving "zero fucks".  This against a background of extra prejudice within the wizard world thanks to her "muggle" origin.  Plus despite all that, despite theoretically being capable of getting any man, she falls for Ron Weasley, which feels really normal and human, and with due deference to JK, having her turn into Harry's arms would have been just wrong.

(2)  Storytelling.  I've already touched on this but there's some very clever adaptation which seems to have gone on here.  Book readers will be able to correct me, but the approach to adaptation seems to be to have essentially hacked out anything which isn't in Harry's POV then gone in and concreted the cracks where necessary.  There are only a handful of scenes which don't have him somewhere in them, which means that there are also few occasions when the audience is privy to information he isn't.  In terms of audience identification, it's brilliant even if it goes against the natural tendency of drama (see Hitchcock and the bomb under the table) and leads to plenty of scenes which feature Harry sitting around being told stuff by individuals and groups.  Bless Daniel Radcliffe for being able to do this convincingly.  If you want to see how young actors incrementally develop their craft, watch these films back to back.

Some examples of where this works well: when Ron disappears in The Deathly Hallows, a more conventional film would have crosscut between the two Hs in the tent and lonely Weasley, but instead it stays with Harry and we're left to wonder, like Harry, where his friend is and what he's doing.  When he returns, Ron tells us his story, giving Rupert Grint one of the best speeches in the series by forcing our imaginations to create some magic which could have been entirely conventional if it had simply been put on screen.  Plus I can't think of a flashback which isn't seen or experienced through Harry receiving some potion, and we're left, like him, to knit the actuality together across the years, with Snape's motivations only properly revealed in the final film (see this montage which edits them all together in chronological order and sob).

Sometimes it can be frustrating, but there's an in-story reason for Harry not being told about things, and so for the information being withheld from us too: his connection with Voldemort.  Though it's not stated clearly at the beginning, there's a constant fear throughout that this dark lord could easily break into Harry's mind and steal this information, which is why it's on a need-to-know basis.  Realising this during the Philosopher's Stone puts an extra complexion on the films, as we're essentially sharing the frustration with Harry.  It would have been very easy to have simply shown us the conversation between the Order of the Phoenix during that film before Harry arrives, but the tension's far more palpable in making us and him wait for it.  That's storytelling which has been thought about, that is.

(3)  Imagery.  These are simply gorgeous looking films filled with spectacle.  Although granted some of the creature designs don't quite work in places, notably Hagrid's brother Grawp, the production design is spectacular.  The Department of Mysteries, a totally digital environment, has all the qualities of an art piece, especially as it implodes, the blue light of the crystal balls and shelves shattering into one another.   The middle hour of The Deathly Hallows Part One is one of the best pieces of fantasy cinema ever, because, in the midst of this massive, expensive blockbuster, it's about three people losing their sanity in a tent, set against an ominous wilderness away from all civilisation, portrait-like close-ups against wide-angle landscapes.

None of which really does it justice.  Even the flaws, like Tom Felton's one-note performance until the very late stages (not helped by only really having a surname as his single line of dialogue) and Quidditch making little to no sense as a sport (is there ever a match when a seeker doesn't catch a snitch, thereby obviating the need for the rest of the team?) are part of the joy of the thing.  Even I was saying "Potter!" along with Draco Malfoy by the end.  Will I read the books now?  Not sure.  I can't quite decide if the thing I like about Harry Potter now is what's in the films only, or the underlying mythology, and if, as I suspect, the things I do love about the films would be spoilt by seeing what's been left out or changed.  We'll see.  Either way, I was sorry to get to the end and if JK and the rest decide to revisit the characters at some later stage, I'll be right there.  Now.

An extra dollop of hubris to go with my hubris ... on AI by Simon Wardley

First, I find it exceptional hubris to assume that we will create artificial intelligence before it emerges. However, even if we somehow accept that we're masterful enough to beat random accident to the race then the idea that we will be able to control it makes me shudder.

Any mathematical model (and that includes any computer program) is subject to Gödel's incompleteness theorems. Roughly, a system powerful enough to be interesting cannot prove its own consistency from within. What that means in plain English is that mistakes / bugs / unforeseen circumstances will happen. There is no way to create a control mechanism which will absolutely ensure that any AI doesn't misbehave, any more than there is a way to create a provably secure system. Given enough time, something will go wrong, in much the same way that, given enough time, any system will be hacked.

The only sensible course of action is isolation. Which is why in security you don't try to make the unbreakable system; you accept it will be broken and minimise the consequences to the smallest risk vector possible. This is why systems like Bromium, which uses micro-virtualisation, make so much good sense, and certainly a lot more sense than much of the rest of the security industry.

If you want to reduce the risk of AI then you have to reduce its interconnectedness i.e. you have to use isolation. But this is counter to the whole point of the internet and where everything is heading (e.g. IoT, mobile, use of ecosystems) where we attempt to make everything connected.

This whole paradox stems from the fact that industrialisation of one component enables the rapid creation of higher order systems, which in turn evolve to more industrialised forms, and the cycle repeats. Our entire history is one of creating ever more complex systems which enable us to reduce the entropy around us, i.e. make order out of chaos. It's no different with biological systems (which are also driven by competition). See figure 1.

Figure 1 - Evolution and Entropy.

This constant process of change increases our energy consumption (assuming we consumed energy efficiently in the first place) and enables us to deal with vastly more complex problems and threats to our existence, but at the same time exposes us to ever greater levels of reliance and vulnerability through the underlying components. It's why the underlying components need to be built with design for failure in mind, which also means isolation. So when you build something in the cloud, you rely on vast volumes of good enough components, ideally spread across multiple zones and regions and even clouds (unless the cost of switching is too high).

But, that also unfortunately requires greater interconnectedness at higher order systems i.e. our virtual machines may be isolated across multiple zones and regions but our monitoring, configuration and control are integrated and connected across all of these. In much the same way that our redundant array of inexpensive disks (RAID) was controlled by software agents connected across all the disks. A major part of the benefit that industrialisation brings to us comes from this very interconnectedness but that interconnectedness also creates its own risk.

When it comes to artificial intelligence then forget being able to provide a provably verified, valid, secure and controlled mechanism to prevent something wayward happening. You can't. Ask Gödel.

The only way to prevent catastrophic consequences in the long term is to isolate as much as possible at this level of "intelligence", assuming we created it in the first place (which I doubt). But the very benefits it creates, which include protection against future threats (diseases, asteroid strike, climate change), come from the interconnectedness that creates this threat.

There is no way around this problem and, as I've said before, if we keep connecting up 100 billion different things then eventually artificial intelligence (of a form we probably won't recognise) will emerge. We can't stop it and, despite our best efforts, given enough time we will lose control of it. The only thing we can do is isolate the "intelligence" throughout the system, but then who wants to do that? No-one. That's where the benefits are: we want one set of super smart, intelligent networked things communicating with another, or how else are we going to create the "paradise" of "any given Tuesday"?

At some point, we need to have that discussion on whether the benefits of interconnectedness outweigh the risks. It isn't a discussion about AI but fundamentally one about the speed of progress. We've already had one warning shot in the financial markets, where the pursuit of competition created a complexity of interconnected components that we lost control of. I know some are convinced that we can create a verified, valid, secure and controlled mechanism to prevent any future harm. Even if you don't agree with Gödel, who says you can't, we've already demonstrated how easy it is for whizz kids to fail.

This discussion on interconnectedness, or as Tim O'Reilly would say "What is the machine we are creating?", is really one about our appetite for risk. Personally, I'm all gung ho and let's go for it. However, we need to have that wider discussion, and with a lot less hubris than we have today.

On the two forms of disruption by Simon Wardley

I've posted on this topic before but I thought it was worth re-iterating. There is not one but at least two different forms of disruption. Don't confuse them.

To explain the two types, it's best to use a map and then look at the characteristics of both types. In figure 1, I've provided a very simple map from a customer (the user) who has a need for a component activity (A) which in turn needs a component (B) which in turns needs component C and so on. Each component is shown as a node in the map with interfaces as links.

Figure 1 - a Basic Map.

For more information on :-
1. How to map, see an introduction to mapping.
2. Evolution, see mapping and the evolution axis.

Now, every component is evolving from the uncharted space to becoming more industrialised through the effects of supply and demand competition. Hence activity A evolves from A1 to A2 to A3 etc. Most of the time such evolution is sustaining, i.e. it involves incremental improvements to the activity, e.g. a better phone.

However, there are two forms of non-sustaining and disruptive change that are possible. The first is the product to product substitution, shown as B1 to B2. The second is product to commodity (or utility) substitution, shown as C1 to C2. These two forms of disruption are very different but in both cases we will have inertia to the change (drawn as a black bar). 

I won't go through the various forms of inertia; there are at least 16 different types and you can read about them in this post on Inertia to change. What I want to focus on is the characteristics of these two forms of disruption. To begin with, I'll simply list the differences in table 1 and explain afterwards.

Table 1 - Characteristics of the two forms of disruption.

Now both forms of disruption threaten past industries. Examples of this would be product to product substitution such as Apple iPhone vs RIM Blackberry or product to utility substitution such as Amazon EC2 vs IBM / HP / Dell server businesses.

However, both forms of disruption are NOT equally predictable. In the case of product to product substitution it is almost impossible to predict what, when and who. There's a set of complex reasons for this (beyond the scope of this post) but Christensen, Gartner, RIM and Nokia didn't get it utterly wrong on the iPhone because they're daft but instead because it's unpredictable. Equally, people who got it right were ... just lucky.

In the case of product to commodity substitution (e.g. product based computing such as servers evolving to utility computing) then this is moderately predictable in terms of what and when but not whom. Weak signals told us back in 2004 that the change was going to happen soon and we knew well in advance what the effects would be (e.g. efficiency, agility in building higher order systems, co-evolution of practices, new forms of data etc) but we just didn't know who was going to play the game. In terms of anticipation, the subject matter had been well explored from Douglas Parkhill's book on The Challenge of the Computer Utility, 1966, and onwards. Everybody in this space (e.g. all the hardware vendors) should have been aware of the oncoming storm. They should have all been able to anticipate it and been well prepared.

I need to emphasise that product to utility substitution gives you ample warning in advance, often ten years or more. However, this assumes you have reasonable levels of situational awareness and it's situational awareness that is the key to defence. If you exist in a firm which has little to none (the majority) then even that which you should anticipate comes as a complete shock. Even if you have the situational awareness necessary to see the oncoming storm then you'll still need to manage the inertia you have to change but since you've got lots of warning then you can prepare.

In the case of product to product substitution, well you can't anticipate. The best you can hope to do is notice the change through horizon scanning and only when that change is well upon you i.e. it's in play. There is no advance warning given with product to product substitution and yes you'll have inertia to the change. The key to defence is to have a highly adaptable culture and even then that's no guarantee.

When it comes to impact, then beyond the disruption of past industries, the effects of the two forms differ. Product to product substitution tends not to create new practices; it's not associated with new forms of organisation or rapid increases in data and new activities. However, product to commodity substitution is associated with rapid increases in data and new activities along with co-evolution of practice and new forms of organisation.

When it comes to gameplay then with product to product substitution there is little option for positioning and your gameplay choices are limited e.g. acquire, copy or abandon the market (before it kills you). 

With product to commodity (+utility) substitution then because you have advance warning you can position yourself to take advantage of the known effects. There is ample scope for strategic play from being a first mover to building ecosystems to manipulation of the market through open means - assuming that you have good situational awareness and know what you're doing. A classic example of good play is Canonical (with Ubuntu) vs Red Hat and others and how Canonical took the cloud market with relative ease.

These two different forms of disruption are also associated with different economic states - one known as peace, one known as war - however that's outside the scope of this post. More on that can be found in this post on peace, war and wonder.

However, what I want to emphasise in this post is that there are at least two forms of disruption. If you assume there is only one then you get into bizarre arguments such as Lepore vs Christensen arguing over predictability or not. Let us be clear: Lepore and Christensen are both right and wrong, because one type of disruption is predictable whilst the other type isn't. This is a debate that can never be resolved until you realise there's more than one type.

It's no different with commodification vs commoditisation or the abuse of innovation. It might make it simpler to use single words to cover multiple different things and hide the complexity of competition but if you do this then you'll repeatedly play the wrong games or try to solve the issue using the wrong techniques.

Oh, but am I really saying that we can anticipate one form of disruption, the shift from product to commodity (+utility)? YES! More than this, we can say roughly what is going to happen from the disruption of past industries stuck behind inertia barriers to rapid increases in new activities built on the recently commoditised component to co-evolution of practice to new forms of data to organisational change.

Actually, we can say an awful lot about this form of change. Hence we already know that as Virtual Reality systems head towards more of a commodity then we'll see an explosion of higher order systems built upon this and fundamental changes to practice (i.e. co-evolution) in related industries. Think devops for architecture but this time physical architecture i.e. from town planning to civil, mechanical and heavy engineering.

But can we really say when? Yes! Well, more accurately, we can do a pretty good job of when using weak signals. These major points of predictable disruption (which take about 10-15 years to work through) we call 'wars' and I've provided a list of them in figure 2.

Figure 2 - Wars

So we know (actually, we've known for some time) that Big Data is likely to go through a war over the next 10-15 years with industrialisation to more utility services (e.g. MSFT's machine learning, GooG's big table or AMZN's elastic map reduce) and many product vendors suffering from inertia will be taken out during the process. In fact, some of the smarter vendors seem to be gearing up for this, hence Pivotal's Open Data Platform play. Others, not quite so savvy, don't seem to have got the memo. Can we work out who will succeed in the 'war'? Fairly quickly once it has all kicked off, so with Big Data we already know who the likely winners and losers will be.

Other 'wars' such as industrialisation of the IoT space to more commodity components are quite far away. Even with this, we've a good idea of what and when just not who. Still between now and when this 'war' of predictable product to commodity disruption kicks off in the IoT space, there will be lots of unpredictable product to product disruption.

Now when it comes to the old 'chestnut' of 'you should disrupt yourself' then let us be clear. You should absolutely disrupt yourself by being the first mover from product to commodity (+utility). This is a change you can anticipate and position yourself to take advantage of. Timing is important because if you go too early then it's easy to undermine a viable product business when a utility business isn't suitable (i.e. the act isn't widespread and well defined enough).

However, when it comes to product to product substitution then you have no idea - i.e. zero - of what is going to disrupt. Saying 'you should disrupt yourself' is tantamount to saying 'you should own a crystal ball'. It's gibberish. The best you can do is hope you discover the change going on (through horizon scanning), recognise it for a disruptive change (no easy thing) and adapt quickly through a flexible culture against the huge inertia that you'll have. Also, you'll need to do this all within a short period of time. Far from easy.

And that's the point of this post. There are two forms of disruption. The way you deal with them is different. Don't confuse them.

Oscars 2015. Reaction. by Feeling Listless

Film The short version:

Here's the long version.

As has been the case since Sky co-opted the legal UK rights, I didn't stay up for the Oscars this year (also because I wanted to sleep what with being in the middle of a working week) (it's complicated).  Having seen the results, I'm not entirely unhappy.  As I've said above, having not seen Birdman I can't make the comparison but just like Gravity and Inception and There Will Be Blood and countless others in the past this feels like the Academy choosing the safe option and not rewarding the film that it really should.

Last year I ended up making a similarly cautious statement about Gravity and then, when I saw 12 Years a Slave, found a relatively conventional film not necessarily doing anything I hadn't seen before and certainly not potentially changing the way films are made.  Perhaps when I've watched Birdman with its "single take", timed explosions, cast full of people I like and a director I've admired in the past, I will say fair enough.  But Boyhood is, well, Boyhood, over a decade in the making with all kinds of potential artistic jeopardy.

Anyway, here's how I did. Probably easier to list what I got right.

Patricia Arquette for best supporting actress.
Ida for best film not in the English language
Big Hero 6 for best animated film
Grand Budapest Hotel for score, costume, production design, makeup and hairstyling

Wanted Boyhood to sweep. Didn't. But I did predict The GBH to do well in the craft categories, which is something.  Perhaps once I've seen everything, I'll be able to judge better...

February 22, 2015

Escape from the planet of the Ice Giants by Charlie Stross

I’m back home and mostly recovered from the jet lag, and according to the doctors I shouldn’t lose too many fingers from frostbite. (I exaggerate, but only a little: as I just spent three weeks in New England—specifically in New York and Boston—my cold weather gear got a bit of use. I mean, only about a metre of snow fell while I was there, and the MBTA only shut down due to a weather emergency twice: by the end of the trip we were making uneasy jokes about Fimbulwinter.)

Along the way I had plenty of meetings and I have some publishing news.

For one thing, I sold a short story (my first in a few years) to the MIT Technology Review. (It’ll be published in their fiction/futures issue, later this year.) And for another thing, “Accelerando” is finally getting a French translation; it’s due to be published by Editions Piranha on April 3rd. Oh, and of course “The Annihilation Score” is coming out for the first time in the UK and USA in the first week of July—that’s the sixth Laundry Files novel.

But the real news is that the trilogy-shaped-object I’ve been gestating at Tor for the past couple of years finally has a publication date and is slouching towards your bookshelves. I say “trilogy shaped object” because “Empire Games” is a single story spanning three books: they’re coming out at three month intervals, starting with “Dark State” in April 2016, to be followed by “Black Rain” and “Invisible Sun”. It’s set in the same multiverse as my earlier Merchant Princes series, although you don’t have to read the earlier series first; it’s about the failure modes of surveillance states and revolutions, the bizarre tendency of bureaucratic organizations to find new purposes for themselves long after their original purpose goes away, and how civilizations deal with existential threats. (Oh, and it has spies, a princess, a space battleship, and an alien invasion—just in case you thought I’d gone totally mundane …)

And to round things off, summer 2016 should also see the publication of “The Nightmare Stacks”, Laundry Files book seven. Because I love you so much that I’ve been writing one of them a year for a while (although I plan to take a year off after this one so I can do something different—every book I’ve written since 2007 has been in-series with something I wrote before then, and I have this itchy urge to surprise you).

So, that’s a four-book year coming up. And maybe there’ll be some short fiction on top. Finally all the hard work I did in 2013-14 is bearing fruit!

The Unexamined Life. by Feeling Listless

TV Here's a short Q&A which took place in 2012 at the BFI after a screening of The Hollow Crown (series one!) in which Richard Eyre talks to Sam Mendes about the difficulties in writing the script and Simon Russell Beale on the process of acting.  Much of it is as you might expect until the closing moments.  Well, just watch.

February 21, 2015

All for the want of a telephone call ... by Simon Wardley

Many many moons ago I used to run security for a retail company. It was an interesting time, with some fairly thorny problems, but the biggest issue I faced was the culture that had become established in the company. The cause of the issue was policy mixed with rumour.

Years before I arrived, one employee had managed to clock up a moderate telephone bill due to personal calls. Someone had noticed. They had then gone on to estimate a total cost to the company of people making personal phone calls. A policy was introduced "no personal calls to be made at work".

It didn't take long for the rumour mill to start. Before long a common rumour was that one employee working late one night had called home to say he was going to be late. He was fired the next day. Of course, none of this was true but that didn't matter. 

A culture of fear had been established in the company and, as with all such cultures, it grew and fed on rumour. It didn't take much for people to leap to the conclusion that security was monitoring all phone calls. The stories became ever more outlandish.

By the time I arrived, the culture of fear around security was well established. No-one wanted to speak to security or point out if something was wrong because of a fear of the consequences. Stories that security would investigate why you were looking at something and who you were talking to were widespread. The common attitude was if something looked amiss, keep your mouth shut.

All of this derived from that earlier treatment of a minor matter that could have been so much better dealt with by simply asking the employee to refrain from making so many personal calls. I've generally found that whilst people occasionally make mistakes or do foolish things, these are exceptions, and treating them like adults is the way forward. Creating policies to deal with these exceptions almost always results in treating the general population as though they're not adult, and the consequences of this are negative.

Certainly in places with high levels of mistrust then people want to know "their boundaries" but this is a sign of mistrust and a culture that is not healthy rather than anything positive. In such environments it can be extremely difficult to rebuild trust and the policies and rumours that surround them will counter you at every turn.

Why do I mention this? Well, be careful with your policies in an organisation. Straying away from the mantra of "treating people like adults so they behave like adults" by introducing policies to cover the exception is a sign of weak management and will come back to bite you in the long term. A policy should be a last resort and not what you should immediately dive into.

Serial and the Podcast Explosion. by Feeling Listless

Journalism The New Yorker points me towards this event in which ...

"a week before he died, the Times media columnist David Carr moderated a panel at the New School called “Serial and the Podcast Explosion,” the first event in a series presented by the university’s newest major, Journalism + Design. Carr, bundled up in a fleece jacket, leaned back in his chair and held a mic to his face at an angle that suggested he was about to do some freestyling. “I’m here as your potted plant for the evening,” he said. The panelists beside him were the stars of the podcasting world: Sarah Koenig (“Serial”), Alex Blumberg (“StartUp”), Alix Spiegel (“Invisibilia”), and Benjamen Walker (“The Theory of Everything”)."
Can't wait to watch it. Mainly posting it here as a nudge. Though mainly transcript highlights, the New Yorker piece also offers useful links to the work of the participants.

The Crypto Wars Will Have No Winners by Albert Wenger

Keeping this short because I am on family vacation, but there have been two stories this past week that we should all take note of. First, there was the revelation that the NSA had installed spying software deep in the firmware of hard drives. Then, details about the breach of SIM card maker Gemalto were published that show how GCHQ and NSA obtained encryption keys for cell phones.

Yet we keep insisting that somehow these crypto wars can be won. That somehow we can build a “trusted” computing platform and do so in a non-dystopian fashion. What will it take for people to abandon this fool’s errand? How long will we continue down this spy-vs-spy path that is pitting the people against the government with ever more resources expended?

We cannot have perfect individual privacy while also having institutional transparency. The two are fundamentally at odds with each other. We need to embrace openness as individuals (starting with activists) and then push it onto and into the institutions. To make this succeed we need to work on protecting people more than protecting data.

Flipping Pancakes: Making and Destroying Galaxy Disks by Astrobites

A sample of galaxies

Fig 1 – Galaxies come in countless shapes, sizes, and colors. The goal of the study of “Galaxy Evolution” is to understand what causes this brilliant diversity. Images from Galaxy Zoo.

There is a wide variety of galaxies throughout our universe — see the figure to the right. Some galaxies are very large, with a huge number of stars in them. Others may have just as many stars, but crowded together in a much smaller space. They may also vary in their morphology: whether they are flat disks (like the Milky Way) or round spheroids. A major goal in the study of galaxies is to understand where this diversity comes from: have they always been so different, or do these differences develop as they evolve?

Because galaxies change over many billions of years (a billion years is often abbreviated as a Gyr), it is quite challenging for astronomers to infer how they evolve from observations alone. To study how galaxies may grow, the authors of today’s paper used a cosmological simulation (similar to the one discussed in this Astrobite). The “Horizon-AGN” simulation modeled a miniature universe filled with over 150,000 galaxies, observing how they are created and interact with one another over more than four billion years. In particular, they studied the evolution of galaxies between 12 and 8 billion years ago, when observations tell us galaxies were undergoing their largest growth spurts.

The authors consider two particular mechanisms through which galaxies change. Galaxies will slowly add loose material — such as gas and stray stars — which is captured by gravity: this is called “smooth accretion”. Galaxies will also occasionally collide and “merge” with their neighbors: these are “major mergers” if both galaxies are around the same size and “minor mergers” if one is much smaller than the other. This paper reports the group’s findings on how smooth accretion and mergers affect three important aspects of galaxy formation:

  • How do galaxies gain new stars (increase their mass)?
  • How do they grow (or shrink) in physical size?
  • How do their morphologies change?
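The accretion/merger distinction above can be sketched in a few lines of code (a hedged illustration: the 1:4 mass-ratio cut separating major from minor mergers is a common convention assumed here, not a value quoted from the paper):

```python
# Classify a growth event the way the text describes: material arriving
# with no companion galaxy is smooth accretion; a companion of comparable
# mass is a major merger, a much smaller one a minor merger.
# (The 0.25 mass-ratio threshold is an assumed convention.)

def classify_event(m_primary, m_secondary=None):
    if m_secondary is None:
        return "smooth accretion"       # loose gas/stars captured by gravity
    ratio = min(m_primary, m_secondary) / max(m_primary, m_secondary)
    return "major merger" if ratio >= 0.25 else "minor merger"

print(classify_event(1e11))             # smooth accretion
print(classify_event(1e11, 8e10))       # major merger (mass ratio 0.8)
print(classify_event(1e11, 5e9))        # minor merger (mass ratio 0.05)
```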

The researchers first had to create a merger tree for the simulated galaxies. At several dozen snapshots during the run, they recorded a list of all galaxies currently in the simulation. Each galaxy in the list is then connected to its progenitor galaxies in the previous snapshot. If a galaxy evolved through mergers, it may have several progenitors. One which only grew by smooth accretion would have only one progenitor, and a newly formed galaxy would have none. This process generates a family history, where each final galaxy has information on how much it grew by smooth accretion, by minor mergers, and by major mergers.
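The progenitor-linking step described above might look something like this toy sketch (a hypothetical data layout and matching criterion for illustration; the Horizon-AGN pipeline uses its own formats and thresholds):

```python
# Sketch of a merger-tree build: link each galaxy to its progenitors in
# the previous snapshot by looking for shared particle IDs. A galaxy
# with several progenitors grew by mergers; one progenitor means smooth
# accretion only; none means the galaxy has just formed.

def build_merger_tree(snapshots):
    """snapshots: list of dicts {galaxy_id: set of particle IDs}.
    Returns {(snapshot_index, galaxy_id): [progenitor galaxy IDs]}."""
    tree = {}
    for t in range(1, len(snapshots)):
        prev, curr = snapshots[t - 1], snapshots[t]
        for gid, particles in curr.items():
            # A previous-snapshot galaxy counts as a progenitor if most
            # of its particles ended up in this galaxy (assumed cut).
            progenitors = [
                pgid for pgid, pparts in prev.items()
                if len(particles & pparts) > 0.5 * len(pparts)
            ]
            tree[(t, gid)] = progenitors
    return tree

# Galaxies 1 and 2 merge into galaxy 3; galaxy 4 forms anew.
snap0 = {1: {10, 11, 12}, 2: {20, 21}}
snap1 = {3: {10, 11, 12, 20, 21, 30}, 4: {40, 41}}
tree = build_merger_tree([snap0, snap1])
print(tree[(1, 3)])  # [1, 2] -- two progenitors, i.e. a merger
print(tree[(1, 4)])  # []     -- newly formed galaxy, no progenitors
```

Walking such a tree from each final galaxy back through its progenitors is what yields the "family history" of growth by smooth accretion, minor mergers and major mergers.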


Figure 2 – The growth of galactic radii (y-axis), relative to the percentage change in galaxy mass (x-axis). Adding a given amount of mass via smooth accretion (red/green) causes very little change in radius. Instead, adding that much mass through a merger (blue) causes the radius to increase dramatically. Fig. 6 from Welker et al. 2015

The authors found that mergers and smooth accretion were each responsible for about 50% of the growth in the mass of galaxies. Yet the effects on their radial sizes were quite different, as shown in the figure to the left. When galaxies grew via smooth accretion, their radii changed only slightly. Yet a merger which added the same amount of mass usually resulted in a much more dramatic size increase.

Most importantly of all, the mechanism for adding material also affected the galactic morphology. Galaxies which underwent a major merger (or many minor mergers) became more spherical. If a galaxy had already formed a disk, it was disrupted by the violent process of merging. On the other hand, galaxies which accreted smoothly gradually became more disk-shaped. Gas is mostly pulled along a particular axis, causing the galaxy to spin up and form a disk. Even initially spheroidal galaxies became more disk-like if they only grew by smooth accretion.

With the power of these cosmological simulations, astronomers are able to trace the evolution of galaxies in ways that cannot be accomplished with observations alone. They suggest that the variety of galaxy morphologies and sizes can be connected to their history of accretion and mergers. Spiral galaxies have likely undergone fewer significant mergers than their spheroidal counterparts. The idea that elliptical galaxies are formed from major mergers also agrees with the longstanding theory of “hierarchical galaxy formation” (see this Astrobite).

Cosmological simulations certainly do not hold all the answers — many approximations are made in order to simplify the calculations. For instance, only a few thousand particles are used to model a “galaxy” that would have billions of stars in reality. Yet these simulations, rough though they may be, are a powerful tool to validate galaxy evolution models, in conjunction with deep observations of these galaxies, as viewed from billions of light years away.

Curiosity update, sols 864-895: Drilling at Pink Cliffs by The Planetary Society

Curiosity's second drilling campaign at the foot of Mount Sharp is complete. The rover spent about a month near Pink Cliffs, an area at the base of the Pahrump Hills outcrop, drilling and documenting a site named Mojave, where lighter-colored crystals were scattered through a very fine-grained rock.

How LightSail Holds Its Place in Space by The Planetary Society

There are few systems aboard a spacecraft more important than attitude control. This infographic shows how LightSail holds its place in space.

February 20, 2015

On open source, gameplay and cloud by Simon Wardley

Back in 2005, Fotango launched the world's first Platform as a Service (known as Zimki) though we called it Framework as a Service in those days (consultants hadn't yet re-invented the term). It provided basic component services (for storage, messaging, templates, billing etc), a development environment using isolated containers, a single language - Javascript - which was used to write applications both front and back-end plus exposure of the entire environment through APIs. 

Zimki was based upon user need and the idea of removing all the unnecessary tasks behind development (known as yak shaving). It grew very rapidly and then it was shut down in its prime as the parent company was advised that this was not the future. Today, Cloud Foundry follows exactly the same path and with much success.

The lesson of this story was "Don't listen to big name strategy consultancy firms, they know less than you do" - unfortunately this is a lesson which we continuously fail to learn despite my best efforts.

Fotango was a very profitable company at the time and, had it continued on that path, there was every chance that Canon would have ended up a major player in Cloud Computing. The upside of this story is that I went on to help nudge Canonical into the cloud in 2008 and wrote the "Better for less" paper in 2009/2010, which had a minor influence in helping nudge others; without this, I probably would never have got around to teaching other organisations how to map. This has turned out to be very useful indeed, especially in getting organisations to think strategically and remove their dependence upon overpriced strategy consultants. As you can guess, I don't like strategy consultancy firms and the endless meme copying they encourage.

The most interesting part of Zimki was the play. It was based upon an understanding of the landscape (through mapping) and use of this situational awareness to navigate a future path. 

The key elements of the play were ...

1) Build a highly industrialised platform with component services to remove all 'yak shaving' involved in the activity e.g. in developing an application. We used to have 'Pre Shaved Yak' T-Shirts to help emphasise this point. The underlying elements of the platform were used in many of Fotango's services, which themselves had millions of users. The use of component services was based upon limitation of choice, an essential ingredient for a platform unless you want to create a sprawl generator. To give you an idea of speed, the platform was so advanced in 2006 that you could build from scratch and release relatively complex systems in a single day. Nothing came close.

2) Expose all elements of the platform through APIs. The entire user interface of Zimki communicated through the same publicly available APIs. This was essential in order to create a testing service which demonstrated that one installation of Zimki was the equivalent of another and hence to allow for portability between multiple installations. It's worth emphasising that there was a vast range of component services, all exposed through APIs.

3) Open source the entire platform to enable competitors to provide a Zimki service. A key part of this was to use a trademarked image (the Zimki provider) to distinguish between community efforts and providers complying with the testing service (which ensured switching). The mix of open source, testing service and trademarked image was essential for creating a competitive marketplace without lock-in and avoiding a collective prisoner's dilemma. We knew companies would have inertia to this change and that they would attempt to apply product mentality to what was a utility world. The plan was to announce the open sourcing at OSCON in 2007; I had a keynote to discuss this but alas that plan was scuppered at the very last moment.

4) Use the open approach to further industrialise the space. Exploit any ecosystem building on top of the services to identify new opportunities and components for industrialisation. A key element of any platform is not just the ecosystem of companies building within the platform but the ecosystem of companies building on top of it by including the APIs in their products or services.

5) Co-opt anything of use. We had guessed in 2005 that someone would make an IaaS play (though back then we called it Hardware as a Service, those meme re-inventing consultants hadn't turned up yet). Turned out it was Amazon in 2006 and so we quickly co-opted it for the underlying infrastructure. With no-one else playing in this platform space, there was nothing else for us to really co-opt. We were first.

6) Build a business based upon a competitive market with operational efficiency rather than lock-in and feature differentiation. There were around 13 different business models we identified, we focused on a few and there was plenty to go around by building upon this idea of a 'small piece of a big pie'.

In the end, despite its growth, Zimki was closed down and the open sourcing stopped. I did however give a talk at OSCON in 2007 covering commoditisation (a rehash of an earlier 2006 talk), the creation of competitive markets and the potential for this. 

A couple of points to note ...
  • The Cloud Foundry model is straight out of the Zimki playbook and almost identical in every respect which is why I'm so delighted by their success. I know lots of the people there and I was in their office in Old Street last week (it's actually opposite the old Fotango offices) and so it was nice to "cross the road" after a decade and see a platform succeeding. The recent ODP (open data platform) announcement also follows the same play. This is all good and I'm a huge supporter.
  • The OpenStack effort made the critical errors of not dealing with the tendency of product vendors to create a collective prisoner's dilemma and compounded this by failing to co-opt AWS. They were warned but were hell-bent on differentiation for the flimsiest of reasons (future speculation on API copyright). They'll survive in niches but won't claim the crown they should have done by dominating the market.
  • OpenShift has failed to learn the essential lesson of limitation of choice. I'm sure some will adopt and learn the lessons of sprawl once again. I do tell RHT every year they should just adopt Cloud Foundry but c'est la vie.

Anyway, there's more to good gameplay than just open sourcing a code base. Keep an eye on Pivotal, they have good people there who know what they're doing.

Twi-mail. by Feeling Listless

About Every now and then, close as I am to my ratio limit, I use the really excellent ManageFlitter to have a clear-out of my Twitter follows, to shed people who clearly don't use the service, never really did, or don't tweet that often.

One of the problems is that there's always the odd organisational feed which offers one tweet every couple of days but which is still always pretty interesting.

The BBC Archive feed is a good example, pointing followers to often otherwise underpublicised new clips on the BBC website or noting when an archive repeat will be on television. But sometimes whole days will pass between tweets.

I'd naturally unfollow a feed like that but I don't want to lose sight of them.

Here's what I've done. I've set up an If This Then That recipe.

IFTTT is a useful way of triggering a thing to do a thing when it's done a thing. Reader's Digest offers a useful explanation.

So, whenever @bbcarchive tweets, I get an email with the content of the tweet therein, and I've set up a filter at Gmail so that all of those twi-mails go into one place.  Which is really useful.

Unlike twi-mail which barely works as a word.  Sorry.

Cool Stars Have Magnetic Fields Too by Astrobites

  • Title: Molecules as magnetic probes of starspots
  • Authors: N. Afram and S. V. Berdyugina
  • First Author’s Institution: Kiepenheuer Institut für Sonnenphysik, Freiburg, Germany
  • Paper Status: Submitted to Astronomy & Astrophysics


    A very spotty Sun during an active period in 2003. Image credit: NASA/SOHO

If you want to make a room full of astronomers laugh, raise your hand after a talk and ask, “but what about magnetic fields?” For all their pervasiveness in astronomy—everything from planets to galaxies can be home to magnetic fields of varying shapes and strengths—relatively little is understood about magnetism in the cosmos. Consider, for example, the solar dynamo. (Dynamo is a fancy term for “thing that creates a magnetic field.”) We can all agree that the closest, best-studied star to Earth has complex magnetic fields which give rise to features like sunspots, but we do not understand the inner workings of our Sun’s dynamo.

While the Sun is an excellent starting point in a quest to understand magnetism, the authors of today’s paper want more. The Sun is but one star, and we know that other stars have dynamos. Properly characterizing stellar magnetic fields is important for exoplanet studies, among other things, because exoplanet observations are hugely affected by the presence of starspots. One way to measure stellar magnetic fields is with the Zeeman effect. Strong magnetic fields affect the spectra of stars, causing a single absorption feature to split into several components. However, this effect is most easily seen in hot stars, yet the cosmos is littered with cooler stars: G dwarfs like our Sun, and cooler-still K and M dwarfs in even greater numbers.
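Part of the difficulty is that the splitting is tiny. A rough back-of-envelope using the standard Zeeman splitting formula (the 5000 Å line, Landé factor of 1, and 3 kG spot field below are illustrative values, not numbers from the paper):

```python
# Zeeman splitting of a spectral line: delta_lambda = 4.67e-13 * g * lambda^2 * B,
# with delta_lambda and lambda in Angstroms, B in Gauss, and g the Lande factor.
# Illustrative values only: a g=1 line at 5000 A in a 3 kG starspot field.

def zeeman_splitting(wavelength_A, g_lande, B_gauss):
    """Return the Zeeman wavelength splitting in Angstroms."""
    return 4.67e-13 * g_lande * wavelength_A**2 * B_gauss

split = zeeman_splitting(5000.0, 1.0, 3000.0)
print(f"{split:.4f} A")  # a few hundredths of an Angstrom -- easily buried in noise
```

Even for a strong kilogauss field, the splitting is a small fraction of an Ångström, which is why indirect probes like polarization and molecular lines are so attractive.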

Spectropolarimetry to the rescue

Today’s paper looks at something only relatively cool stars can have in their atmospheres: molecules. Specifically, the authors investigate how molecular absorption lines change in starspots as a function of magnetic field strength. This offers another advantage over the atomic Zeeman effect, because it measures the magnetic field only in starspots—definitive signatures of magnetism which are the only places certain molecules exist—instead of a “global” magnetic field that could result from several strong components canceling each other out.

The authors consider four molecules (MgH, TiO, CaH, and FeH) in three kinds of stars (G, K, and M dwarfs) that can be observed in three ways. One way is “regular” spectroscopy: measuring the strength of a molecular absorption feature. The other two ways use spectropolarimetry to measure the Stokes V and Q parameters.* Because different flavors of polarized light are sensitive to magnetic fields, astronomers can take advantage of this to observe how absorption lines change in the presence of magnetic fields. An example for an MgH absorption feature in a K dwarf star is shown below.


Magnetic starspots change how MgH absorption would look in a K dwarf star. The solid, dashed, and dotted lines show increasingly higher spot coverage in the left column and increasingly strong magnetic fields in the right column. The top panels are the absorption feature’s overall intensity, the middle panels are its Stokes V parameter (measuring circular polarization), and the bottom panels are its Stokes Q parameter (measuring linear polarization).

Molecular alphabet soup

Of course, not all molecules, starspots, or magnetic fields are created equal. By modeling how different combinations of these would look on different kinds of stars, the authors make clear predictions to guide future observations. For example, are you most interested in Sun-like G dwarfs? Then you should probably focus on MgH and FeH absorption features. Or perhaps your instrument setup is better-suited to observing CaH and TiO. In that case, you might consider studying cooler M dwarfs and adjust your exposure times accordingly. The figure below summarizes the paper’s predictions for which Stokes V (circular polarization, left panel) or Q (linear polarization, right panel) signals are likely to show up for each molecule in a variety of cool, spotted stars.


Observers should find more signs of magnetic activity in some situations than others. Both Stokes V (left) and Stokes Q (right) signals are predicted to be strongest in CaH and TiO molecules for M dwarf stars, while MgH and FeH are a better bet for hotter G dwarfs.

These predictions pave the way for careful observations of magnetic starspots, because now observers can select targets and exposure times more efficiently. The Sun isn’t the only cool star in town with a detectable magnetic field, which brings us one step closer to unlocking the mysteries of stellar dynamos.


*Section 2.1.3 of this article introduces the Stokes parameters in a friendly astronomical context.

Why We Write to Congress by The Planetary Society

It's time to write to Congress in support of planetary exploration. Why? Because it works.

Our Global Volunteers: February 2015 Update by The Planetary Society

The Planetary Society has amazing volunteers doing outreach work around the globe. Check out what they've been up to recently!

February 19, 2015

Birth Chart. by Feeling Listless

Music The Official Chart Company has unveiled a new website format which includes the ability to quickly find a particular chart from the archive. Here then, is the top ten from when I was born:

Which isn't awful and has a few tracks I've heard of, even if only in cover versions.  But it's also notably all blokes.  The first female singer is down at number 22, Olivia Newton-John.  I wonder to what we can attribute this.  You can also search for an artist's chart history.

Crash course in exoplanet observations with the James Webb Space Telescope by Astrobites

Image of James Webb Space Telescope: Credit: NASA

And just like that, it’s now 2015. Three years before NASA’s James Webb Space Telescope (JWST) enters an orbit 930,000 miles away from Earth. JWST is the successor to the Hubble Space Telescope but is almost 4 times the price. Putting two and two together, we have three years to figure out how to most effectively utilize this 9 billion dollar mission… No pressure. But what does this entail? There are three aspects to mission preparation: 1) test as you fly, fly as you test, 2) learn how to effectively use your instruments, 3) pray that everyone was paying attention in 5th grade when we learned imperial-to-metric unit conversion. Let’s assume (3) is true, and if you want video proof that NASA engineers are diligently testing each tiny part of the JWST instrumentation, check out the webinar “Behind the Webb”! So that leaves us with 2) learn how to effectively use your instruments. Barstow et al. recognize that star spots, instrument systematics, and “stitching” (see below) might affect the exoplanet science return. In order to investigate these effects, an intensive set of models is needed. Barstow et al.’s procedure is as follows:

  1. Simulate 4 test case planet spectra (Hot Jupiter, Hot Neptune, Warm Neptune, Earth)
  2. Simulate the planet’s host star with star spots
  3. Simulate in transit and out of transit spectra
  4. Simulate what the in and out of transit would look like through the eyes of JWST
  5. Pretend you don’t know anything about the planet you simulated and use a statistical analysis (also called reverse modeling) to see if you can deduce any planetary parameters
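The five steps above can be sketched end-to-end with a toy model. Nothing here is Barstow et al.'s actual code: the single Gaussian "feature", the noise level, and the brute-force grid-search retrieval are stand-ins for their full radiative transfer and statistical machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(1.0, 5.0, 200)   # microns (toy grid)

def forward_model(depth, abundance):
    """Toy transit spectrum: a flat depth plus one Gaussian feature at 2.7 um
    whose strength scales with a hypothetical 'abundance' parameter."""
    feature = abundance * np.exp(-0.5 * ((wavelengths - 2.7) / 0.2) ** 2)
    return depth + feature

# Steps 1-4: simulate the "true" planet and what a noisy instrument records.
true_spectrum = forward_model(0.01, 0.002)
observed = true_spectrum + rng.normal(0, 2e-4, wavelengths.size)

# Step 5: "forget" the inputs and retrieve them via grid-search chi-square.
depths = np.linspace(0.005, 0.015, 51)
abundances = np.linspace(0.0, 0.004, 51)
chi2 = np.array([[np.sum((observed - forward_model(d, a)) ** 2)
                  for a in abundances] for d in depths])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(depths[i], abundances[j])   # should land near the inputs 0.01 and 0.002
```

Real retrievals swap the grid search for MCMC or nested sampling and the Gaussian for line-by-line opacities, but the forward-model-then-invert structure is the same.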

In the context of transmission spectroscopy, the planetary parameters that one would hope to deduce are the gas abundances (in this case: atmospheric volume mixing ratios of H2O, CO2, CO, CH4, H2/He) and a temperature-pressure (T-P) profile. Gas abundances tell us what kind of atmosphere the planet has. Is it like Mars with loads of CO2, or like Earth with oxygen? T-P profiles tell us what is going on with the climate. Is it like Earth, where I can go outside and tan, or like Venus, where I can go outside and cook a pizza on the floor in 9 seconds? But before we discuss reverse modeling, it’s important to understand our sources of noise: star spots, systematics and stitching.

Star Spots

Let’s say I am observing planet X around star YZ. I want to observe planet X for 30 minutes before it transits, an hour in transit and 30 minutes post transit. This is equivalent to saying I want to observe star YZ for a total of 2 hours. Because my star is bright, I take a series of short exposures throughout the 2-hour transit and end up with several out of transit spectra (star only) and several in transit spectra (star plus planet). If I subtract all the out of transit data from the in transit data I can get out the planet spectrum: (Star + Planet) – Star = Planet. But, nearly all stars vary in brightness due to star spots (small regions of cooler temperatures which create brightness fluctuations). Meaning, we might end up with something like: (Star + Planet) – (Star + Star Spots) = Planet – Star spots. So how accurately can JWST observe exoplanets if our star is rapidly varying? Will this effect dominate our observations?
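A toy numerical version of that subtraction (all numbers hypothetical) shows how spot-driven jitter leaks straight into the measured depth:

```python
import numpy as np

rng = np.random.default_rng(1)

true_depth = 0.01                        # (Rp/Rs)^2 for a hypothetical planet
star_out = 1.0 + rng.normal(0, 0.002, 50)   # out-of-transit flux, 0.2% spot jitter
star_in = 1.0 + rng.normal(0, 0.002, 50)    # the star during transit varies too
in_transit = star_in * (1 - true_depth)

# (Star + Planet) - Star: with a perfectly constant star this recovers the
# depth exactly; with spot variability the estimate carries stellar noise.
measured_depth = 1 - in_transit.mean() / star_out.mean()
print(measured_depth)   # near 0.01, but offset by the uncancelled spot signal
```

Averaging many exposures beats the noise down, but a spot crossing or a slow brightness drift during the two-hour window produces a systematic offset that averaging alone cannot remove.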

Systematics and Stitching

JWST is the first space explorer with instruments optimized for exoplanet characterization (not detection). Although Hubble and Spitzer were tremendously successful for exoplanet science, it’s not what they were designed to do. JWST has four instruments: NIRSpec, NIRISS, MIRI and NIRCam. All four will provide groundbreaking science for exoplanets, but for the sake of a simple discussion let’s focus on NIRSpec and MIRI. NIRSpec, the Near Infrared Spectrometer, is the only spectrometer onboard that covers the 0.6–5 micron region in a single shot. So imagine you are taking a panorama. NIRSpec would allow you to point and take the entire image in one click. NIRCam and NIRISS would require you to take about 3 images and then stitch them together later to create your image. Each time you introduce stitching, you introduce a certain degree of uncertainty into the final spectrum. Using NIRSpec will decrease this uncertainty.

MIRI covers the region from 5-12 microns in two observations. Using a combination of NIRSpec and MIRI would allow us to cover an unheard-of wavelength range by stitching together 3 images. Our task is becoming increasingly complicated because the object we are observing (the star) could be constantly changing due to star spots. Going back to the panorama example, this would be as if you were trying to stitch together an image of a large football game. The players are constantly moving, so how will you know how to line up the image correctly? With these sources of error in mind, Barstow et al. discuss what exoplanet science we can expect from JWST.

Science Results

Barstow et al. looked at four case planets but here, let’s just look at the two extreme case studies: 1) something we know we can characterize: a hot Neptune orbiting a cool star (M dwarf), and 2) something that will be very challenging: an Earth-like planet in the habitable zone of a cool star (M dwarf). First, let’s compare the secondary transit spectra (i.e. when the planet is about to pass behind the star). The spectra are plotted showing the ratio of the planet radius to stellar radius. When photons from a particular gas are emitted from the planet, the planet appears bigger, and we get a spike in the ratio (y-axis). This indicates the presence of a particular gas.


Left: secondary spectra of a Hot Neptune around an M dwarf, showing detectable features. Right: secondary spectra of an Earth-like planet around an M dwarf, also showing detectable features. Main point: hot Neptunes can be easily observed, Earth like planets are much harder.

Notice that in the case of the hot Neptune, there is very little noise. This is because the planetary atmospheres of hot Neptunes are puffier, hotter and, therefore, easier to see. You can easily spot features from several atmospheric gases: CH4, CO2, CO, and H2O. The Earth-like spectrum is much noisier, but we can still make out a CO2 and an O3 feature. Now compare the temperature-pressure profiles retrieved from both these cases. Or wait! It is not possible to retrieve a T-P profile for an Earth around an M dwarf. There are just not enough photons to constrain anything. That being said, the T-P profile retrieved from the hot Neptune is beautiful! The black line is the profile that went into Step #1 and the colors are all the profiles that were backed out of their reverse modeling.

In the end, Barstow et al. demonstrate that despite star spots, systematics and stitching, we will be able to completely constrain the atmospheres of hot Neptunes, hot Jupiters and even warm Neptunes. Before the Kepler Mission discovered 4,000+ planets, all we knew of exoplanets was that they existed. Post JWST, we will have unveiled a whole slew of planet characteristics, for planets which don’t resemble anything in our Solar System. Furthermore, if we find a nearby planet around a cool star, Barstow et al. show that it will be challenging, but not impossible, to deduce what may be contained within an Earth-like planetary atmosphere.

Mapping Europa by The Planetary Society

Several global maps have been made of Europa, but amateur image processor Björn Jónsson felt they could be improved—so he decided to make a new one.

New Horizons spots Nix and Hydra circling Pluto and Charon by The Planetary Society

A series of images just sent to Earth from New Horizons clearly shows Pluto's moons Nix and Hydra orbiting the Pluto-Charon binary.

Space Robot Sad Trombone by Charlie Stross


I feel a great pathos for robots.

Not just any robots, mind. But explorer robots. Brave little space robots. Voyager and Venera and Curiosity and Beagle robots. Spirit and Opportunity robots, possibly even more than all the others.

I think, honestly, most people do. We personalize the brave little toasters. They have twitter accounts and show up in completely heartbreaking xkcd strips. We root for them, pull for them, and appreciate their triumphs, tribulations, and traumas.

Scientists are still learning new things from images of Jupiter taken by Voyager 1 in 1979, when I was eight years old.

Eight. Years. Old.

We made a robot the size of a car, and we fired it into space, and it's never coming home. It's going to zoom around out there for-functionally-ever. Someday, a squintillion years from now, when we're long gone, there's a tiny possibility that some other people might find it and stare at it and know that we once existed.

It's a gigantic "Fuck you," to the Drake Equation. It's futile and beautiful, and it matters desperately.

That emotion right there? That thing you just felt, if you are anything like me?

That was sense of wonder.

And that is also what the word "humbling" means.

Brave space robots literally make me misty. And it's not just because they serve as a proxy for the East African Plains Apes millions of miles away, at their controls. In fact, I think most of the time we forget that our speciesmates are back there (back here!) on Earth, fiddling with joysticks and flipping toggles. Or tapping away on keyboards and puzzling over ambiguous shadows in photographs.

We say, "Curiosity discovered--" after all. We even construct gender for her and her sister Martian rovers--they're female, a pack of brave, adventurous Girl Scouts out there earning merit badges and drilling into rocks.

I may have shed a tiny tear when I stayed up way, way too late to 'watch' her land. I was certainly rooting for her with as much ferocity as I've ever rooted for a Bruce Willis character, and considerably more than I could muster for WALL-E. (That'll be my unpopular confession for this column.)

It's interesting to me that we can individually haul up this emotional connection, this strength of empathy, for a machine that--objectively speaking--is just a machine. Not a living creature with feelings and agency; nothing with an object position of its own. More than that, that empathy is easy for us.

Collectively, we seem to have a hard time summoning that understanding, that complex imagining of the other, for beings who are far more similar to us than these brave space toasters. Who are separated only by a gene controlling pigmentation, or a religious or political belief structure. Possibly it's because brave little robots are so alien. We don't come with any installed stereotypes or unexamined prejudices, and they're not exactly competition. Maybe it's because robots don't have political opinions, or a convoluted and shared history of competition and oppression.

In any case, maybe it's a good sign.

If we can learn to care about robots, maybe we can learn to care about less alien but more strange creatures, such as each other.

February 18, 2015

My Favourite Film of 2008 by Feeling Listless

Film Seeing Charlie Kaufman's Synecdoche, New York at the Cornerhouse in Manchester was a profound experience.  Not so much the process of seeing the film, but rather seeing how the film has processed me since.  Even to this day, I'm not sure I've quite come to terms with what happened, certainly not to a point in which I've been able to watch the dvd copy I bought full price not long afterwards as I could have done in preparing to write about it now.

Here's why.  At around that time I began cataloguing my dvds and as you might remember, contrary to all sense and after having probably watched High Fidelity a bit too closely, I decided this should be done in chronological order based on the year in which something is set.  Full details can be found here and if you're not shaking your head by the end then there's something wrong with you.

Little did I realise at the time just how profound a decision that would become because here I am in 2015 still cataloguing.  Every so often when I've collected enough of them together, bought dvds (removed from their amaray boxes and put in plastic wallets to save space) and whatnot, I'll pile them up, enter the details in Access, try my best to adjudicate which year they're set in, usually easier if the filmmaker has decided for me, then sort them across the boxes.

Potentially this can be quite therapeutic and educational.  But there's also a certain level of distress involved as I realise that this may never end.  Plus there's the idiotic early decision to not also bother to record the year of production so I can't also see the century of cinema or place all of the films produced in a given year together digitally without retrospectively going back.  Genuinely can't be bothered.

Like PSH's character in S-NY, I'm stuck in a cycle unable to stop because of all the work done across the previous decade but fearing I may have to stop for my own sanity because it didn't occur to me that I would be doing this for much of the past decade.  I keep imagining making huge decisions like separating the television and films and storing them separately or going through a process of "de-accessioning" and only keeping what's important.

Essentially, I need to be able to look at this shot ...

... and not think, "Yep, pretty much.  Looks like useful storage."

Accelerometers just might work by Goatchurch

The green is the raw plot of the accelerometer vector which was aligned with the crossbar of my bike on the ride. The red is the altitude, the yellow is my speed — clearly slower going up hill than going down as time advances from left to right. We stopped for a bit at the top of the hill.

This is smoothed with an exponential decay factor of 50/51 on a time sample rate of 0.1seconds, so a sort of 5 second time window.
This is applying the exponential smoothing filter backwards as well, which is a trick I heard about a few days ago. I haven’t worked out the maths of it yet, but it looks good.
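The forward-then-backward trick looks something like this (a minimal sketch using the post's 50/51 decay factor; the actual code may differ). Running the same filter once in each direction cancels the phase lag that a single forward pass introduces:

```python
def ema(xs, decay=50/51):
    """One-pass exponential moving average with the post's decay factor."""
    out, s = [], xs[0]
    for x in xs:
        s = decay * s + (1 - decay) * x
        out.append(s)
    return out

def smooth(xs, decay=50/51):
    """Forward pass, then a backward pass over the result: a zero-phase
    smoother, so features are not dragged later in time."""
    fwd = ema(xs, decay)
    return ema(fwd[::-1], decay)[::-1]

noisy = [0.0] * 100 + [1.0] * 100    # a step buried at sample 100
smoothed = smooth(noisy)             # the smoothed step stays centred near 100
```

This is the same idea as SciPy's `filtfilt`, which applies any linear filter forwards and backwards for exactly this reason.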
Here are some vertical lines showing periods of ascent and descent, with the second white horizontal line denoting the overall average accelerometer reading, which you can think of as approximating how much the bike cross bar is pointing up or down from the horizontal. I can convince myself that it is negative on the uphills and positive on the downhills, where it is tending to point more in the direction of gravity.
Here’s a zoomed-in section of where we pedalled down the hill and then heaved our way back up the other side. Because the rates of descent and ascent are about the same, the slope down must have been shallower, as I don’t pedal up hills very fast.
Unfortunately I’m not competent enough to overlay this on a map to see these places on the contour lines, and I don’t have a bike wheel trip magnet to measure distance travelled properly.

Anyway, it’s not really for my bike; it’s for putting on my hang-glider. The bike is just a good way to test things till I can get out flying again.

The Pen by Dan Catt

The Pen

I've been asked about my pen (for reals) a couple of times, so I thought I'd write a blog post about it. It's a Tombow Zoom 707 Ballpoint Pen (amazon UK/US), it cost £28 and I bought it for myself as a Christmas present.

I keep two Field Notes notebooks in my pocket, at night I take them out and put them on the bedside table. My life is dense, not hectic, not crazy busy, just every moment is filled. We have three kids, we home educate, the start-up I'm involved in is blowing up, I try to swim, I try to run, I'm learning the bass, I try and put together a podcast that takes an age, sometimes I even try to write a blog post or two. In all of that there's hardly any time to do other stuff, although that doesn't stop me thinking about other stuff. That other stuff goes down in one of the two notebooks.

The Pen

When I think of something I often can't get to a laptop or my phone in time, I tried, the thoughts don't stay in my head long enough to survive the gauntlet of children asking me things on the way upstairs. If you've watched the film Memento it's like that scene where he's looking around for a pen to write the thing down before he forgets it. I decided I needed notebooks and a pen with me at all times.

I think it's the most I've ever spent on a pen.

Before this I used the Field Notes pen that came with the notebooks. It's a good pen, feels nice to hold, flows well, but the clip doesn't hold it in my pocket properly. I can't slide it into my jeans without having to put a fingernail round the back of the clip to make sure it clips properly. When I sat down, the pen wouldn't stay in the same place.

It was all kinds of wrong.

The Zoom 707 slides into the pocket right next to the seam, and better still it stays there, after all I didn't want to lose a £28 pen. For the next few days I'd reach down and feel for the red ball on the clip, to know it was still there.

The Pen

Now it's a reflex action, I'll brush my hand past the side seam of my jeans and feel the pen's clip is still there. When I feel it I know I can't forget anything, life is speeding on but in that one moment I know I haven't left anything behind. If I need to remember something it's in the notebook, if it's in the notebook I don't need to remember it. I can clear my mind and move onto the next thing.

When I stop to take a moment, I can touch the red ball feel it against my fingertips and the memory of the last thing I wrote comes back to me. It's a shortcut to having to open the notebook and read it back.

It's a memory machine, a meditation device and an anchor.

The Pen

All this is fine and good, but how does it write?

It was all scratchy when I first used it, I thought I'd made a terrible mistake. Other pens I'd had took a word or two for the ink to get going, with this I wrote a page and it still didn't seem right. I just needed to give it more time, after a while of being in my pocket and being used the ink started flowing properly.

I was used to the thick slick blackness of the Field Notes pen, the Zoom is light and narrow. My handwriting is spidery and the pen suits it well.

It turns out, importantly, that I can now write more words per line, more thoughts per page with the new pen. It's a little thing but these notebooks aren't big so utilising more of the page is a good thing. I wasn't expecting it, so it's a bonus.

It writes fine and good.

That is the story of my new pen.

The Closest Encounter of a Stellar Kind by Astrobites

Title: The Closest Flyby of a Star to the Solar System

Authors: Eric E. Mamajek, Scott A. Barenfeld, Valentin D. Ivanov, Alexei Y. Kniazev, Petri Väisänen, Yuri Beletsky, Henri M. J. Boffin

First Author’s Institution: University of Rochester

Status: Accepted by ApJ Letters

A history of nemeses

Figure 1 from the paper showing the distributions of the times of closest approach and the distances of closest approach from the orbits calculated by integrating the motion of W0720 and the Sun in a Galactic gravitational potential. The blue line on the right shows the maximum semi-major axis for retrograde orbiting Oort Cloud comets (0.58 pc) while the line on the left indicates the edge of the dynamically-active inner edge of the Oort Cloud (0.10 pc).

The idea that catastrophic events on Earth could have been caused by other stars has been around since at least the 1980s, when two paleontologists at the University of Chicago, David Raup and Jack Sepkoski, noticed an apparent periodicity of 26 million years in Earth’s last ten major extinction events. Soon after, two papers (both published in the same issue of Nature) independently proposed that these mass extinctions could have been caused by an as-yet-unseen binary companion to the Sun. This hypothetical star has since gained fame as “Nemesis” or the “Death Star.”

Unfortunately, our theorized stellar nemesis has never been detected, and a binary companion seems ever more unlikely to be the cause of a possible periodicity in Earth’s mass extinctions. Nevertheless, this hasn’t stopped astronomers from looking for other possible close encounters of the stellar kind. While we don’t necessarily expect periodic mass extinctions on Earth to be the result of a mysterious binary companion, it seems possible–even likely–that nearby stars could occasionally pass close enough by to perturb the Oort Cloud and send a shower of comets our way, some of which could contribute to extinctions on Earth.

The authors of today’s paper claim to have found the star with the closest known flyby to the solar system, WISE J072003.20-084651.2 (W0720), or “Scholz’s star”. Its trajectory is thought to have taken it to a staggeringly small (at least by astronomical standards) distance from the Sun of about 0.25 parsecs, or 52,000 AU (an AU is the average distance between the Earth and the Sun). This would put W0720 just inside the outer Oort Cloud at its closest. For comparison, the closest known star to us, Proxima Centauri, is located a comfortable 1.3 parsecs away. Previous astrobites have also discussed possible effects of stellar flybys on the eccentricity of exoplanet orbits and the size of a star’s protoplanetary disk.

The authors became intrigued with W0720 when they noticed its unusual combination of low tangential velocity and proximity. Since objects that are closer to us tend to look like they’re moving faster, the fact that this star was both close and slowly-moving across the sky was a good indication that much of its velocity was radial–something that recent papers have confirmed. In that case, they were interested to see if it had or would eventually come close to the Sun and, if so, what effects it may have had on the number of comets that we see.


To study the trajectory of the star–which was recently discovered to actually be a binary–they integrated the orbit of W0720 and the Sun in a Galactic gravitational potential and used the velocity data of the star to model orbits of the star around the Sun. They simulated the orbit of W0720 and the Sun ten thousand times, which allowed them to sample the space spanned by the uncertainties in their measurements (obtained from observations and the literature). From the resulting distribution of closest-pass distances, they found that W0720 would have come within 0.252 pc of the Sun about 70,000 years ago. To check their results, they also calculated these numbers using a simple linear trajectory (which ignores the gravitational potential of the Galaxy) and found that their numbers agreed to within 2.5%–unsurprising, given how recently W0720 passed by. Figure 1 from the paper shows the distribution of both the distance and time of closest approach using the more accurate integrated approach.
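The linear-trajectory check is easy to reproduce in miniature. The distance, radial velocity, tangential velocity, and Gaussian uncertainties below are illustrative stand-ins rather than the paper's measured values, but the geometry of the Monte Carlo is the same: sample the inputs, extrapolate a straight line, and record each sample's closest approach.

```python
import numpy as np

rng = np.random.default_rng(2)
PC_PER_KM_S_IN_YR = 977_800     # 1 pc / (1 km/s), expressed in years (approx.)

def closest_approach(r0, v):
    """Straight-line closest approach: distance |r0 x v|/|v| (pc) and the
    time -r0.v/|v|^2 (years; negative means it happened in the past)."""
    r0, v = np.asarray(r0, float), np.asarray(v, float)
    v2 = v @ v
    t = -(r0 @ v) / v2                                  # pc / (km/s)
    d = np.linalg.norm(np.cross(r0, v)) / np.sqrt(v2)   # pc
    return d, t * PC_PER_KM_S_IN_YR

# Hypothetical inputs: a star ~6 pc away, receding at ~83 km/s with ~3 km/s
# tangential motion, with made-up measurement uncertainties.
samples = [closest_approach([rng.normal(6.0, 0.1), 0.0, 0.0],
                            [rng.normal(83.0, 2.0), rng.normal(3.0, 1.0), 0.0])
           for _ in range(10_000)]
d_min = np.median([d for d, _ in samples])
t_min = np.median([t for _, t in samples])
print(d_min, t_min)   # roughly a few tenths of a pc, tens of kyr in the past
</imports>```

Because the radial velocity dominates, a mostly-radial trajectory like this one yields a close pass in the recent past, which is why the linear and fully integrated answers agree so well here.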

The authors also noticed that the nearest pass of W0720 actually corresponds with the location of the aphelia of some comets from the Oort Cloud. At first glance, this might seem to indicate a connection between W0720’s visit and the comets, but Mamajek et al. disregard this as a possibility since the orbital period of such comets would be on the order of 2 million years–meaning that they can’t have reached us yet. They also note that only the closest passes of W0720 in their simulations (a distance of 0.087 pc) actually brought it within the inner Oort Cloud. Furthermore, when they calculate the encounter-induced flux of comets from W0720, they find that it isn’t enough to be noticeable compared to the amount that would be generated by tidal effects of our Galaxy. This causes them to conclude that while W0720’s trajectory took it close by the Sun quite recently, it had a negligible effect on the number of comets that we would see 2 million years from now.

But is W0720 really the closest? 

One last point remains, however, and that is the question of whether W0720 is really the star with the closest approach. A paper by C.A.L. Bailer-Jones published just last December named a different star, HIP 85605, as the closest approach to the Sun. Bailer-Jones estimated that HIP 85605 has a 90% chance of coming between 0.04 and 0.20 pc of the Sun, which puts it closer than W0720, and the authors address this point in the final section of their paper.

The measurements of HIP 85605 were based on data from the satellite Hipparcos, which obtained parallax measurements of many nearby stars. However, as acknowledged in the earlier paper, it was possible that the astrometry for HIP 85605 was incorrect, which would make the star appear to be much closer than it actually is.  Mamajek et al. investigate this by looking at HIP 85605’s color. A star’s color and spectrum can indicate what kind of star it is, and therefore how bright it should be. When they studied HIP 85605, they found that it had the color and spectrum of a K star, but that the distance obtained from Hipparcos’ parallax measurement would give it the luminosity of an M star (which is much dimmer). They resolve this discrepancy by suggesting that the Hipparcos parallax measurement is incorrect. This in turn puts HIP 85605 farther away than previously believed and gives it a different velocity than was calculated in Bailer-Jones’ paper. As a result, W0720 would beat out HIP 85605 to be the star with the closest approach.

For now at least, it seems like W0720 will remain the closest known stellar flyby of the Sun. Even so, the authors point out that parallax data from the newer Gaia instrument may very well allow us to detect other candidates (close by and with small tangential motion) for our nearest stellar encounter. And so we await possibly more exciting discoveries in the future…

Planetary CubeSats Begin to Come of Age by The Planetary Society

Van Kane rounds up some recent planetary mission concepts based on CubeSat technology.

February 17, 2015

Fresh Air on New Yorker. by Feeling Listless

Journalism NPR's Fresh Air has a lengthy interview with David Remnick, the current editor of the New Yorker, as it turns 90:

"What I inhaled at The New Yorker was a culture of attention. So for example, there had been - she's no longer with us anymore - there had been a kind of super copy editor named Eleanor Gould. And she would do, at some late stage of a piece's editing - and there are many layers to the editing - what was called a Gould proof. And she had been there for decades and decades. She had been there when Harold Ross was around. And these proofs taught you so much about repetition and indirection and all the muck that can enter bad prose if you aren't careful. This woman could've found a mistake in a stop sign. I have a copy of a proof in which she found four mistakes in a three-word sentence. I'm not kidding around."
It's pretty expansive and covers recent successes and failures, including around the lead-up to the Iraq War in the 2000s. There's a podcast here and a transcript if you want to skim.

Simple linear relations ruined by Goatchurch

Quick report on the misery of trying to get any good data out of these fancy good-for-nothing data sensors.

I did a bike ride on Sunday around Yorkshire, from Barbon to Dent, up the valley, down to the very fine Cafe Nova in Sedburg and then back to Kirby Lonsdale via a bridge over the river Lune where Becka and I went kayaking in January and got scared (it looked a bit tame in these water levels). This was the day after the day before where I broke my caving famine and did a nine hour Easegill traverse from Pippikin to Top Sink while the hard people (incl Becka) did the reverse route and went out Bye George to celebrate Tom’s birthday. (This was the same Tom who drove out to Austria with me last May so I could go hang-gliding when Becka stood me up to go on a caving holiday.) Hint: for my birthday I will not be going caving.

So, anyway, you’d think something as simple as the GPS-altitude and barometric readings would be somewhat related.

This is what the plot looks like of barometric pressure along the X-axis (zeroed at 990 mbar) vs altitude, which ranges from 102m to 305m. Yellow is the first hour, where we went over the hill, and blue is the second hour pedalling up the valley from Dent.

Not a great correlation. Here’s the same picture zoomed in. The squiggles are predominantly left and right accounting for the noise of the barometer readings.
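One way to tame that sort of noise is a rolling average; here is a minimal sketch (my own illustration, not the author's script — the window size is whatever number of samples you choose):

```python
# Minimal rolling-average smoother, as one might apply to noisy barometer
# readings. The default 7-sample window is an assumption for illustration.
def rolling_mean(values, window=7):
    """Average each run of `window` consecutive samples."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(rolling_mean([1, 2, 3, 4, 5, 6, 7, 8, 9], window=3))
# [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```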

Suppose I take a rolling average of sequences of 7m and plot the same here without all the noise, getting the yellow line.
Still pretty wobbly. The cyan is the plot of the barometric formula, which is:

101325 * (1 − 2.25577e-5 * altitude)^5.25588

This is near as damnit a straight line of slope -0.08488715682448557. Applying simple linear regression to the slope gives -0.08992119168062143, which is not a great match.
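Those numbers are easy to sanity-check; a few lines of Python (mine, not the post's code) evaluate the slope of altitude against pressure implied by the formula over the ride's altitude range:

```python
# Sketch (not the author's code): slope of altitude vs pressure implied by
# the standard barometric formula, over the ride's 102-305 m range.
def pressure(h_m):
    """Standard-atmosphere pressure in Pa at altitude h_m metres."""
    return 101325.0 * (1.0 - 2.25577e-5 * h_m) ** 5.25588

h1, h2 = 102.0, 305.0
slope = (h2 - h1) / (pressure(h2) - pressure(h1))  # metres per pascal
print(slope)  # ≈ -0.0849, matching the quoted slope
```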

Maybe I ought to work out a way to do this calculation in run-time on the device itself to give a measure of how rubbish the altitude-barometer agreement is during operation so I don’t have to bring it back here and run these complicated python programs on the data.

Then I could see if it’s responsive to the mode of travel, eg bike vs walking up and down the hill.

The next correlation to look at from this data is tilt of the bike frame registered from the accelerometer vs the slope climb according to the GPS. I’ve got very little hope this will work, so have put it off. I’m already sure that the temperature vs altitude signal is completely lost in the noise, probably due to the proximity to the ground on which the sun was shining.

I hope to see something better if I ever get this thing in the air. Right now I’m 3D printing enclosures to grip on to the base bar and am gathering a desk full of useless bits of plastic. Got to push on and not be distracted.

February 16, 2015

Theatre on Television. Again. by Feeling Listless

TV Well now, what's this? BBC to demonstrate renewed commitment to prime time Arts programming and partnerships. Given the riches which have turned up on the iPlayer and BBC Arts website lately, I think it's going swell but let's see what this means in terms of the thing I'm otherwise most interested in, theatre:

"Later in the year, there will be new seasons on Poetry and Theatre. The Theatre festival includes Ian McKellen and Anthony Hopkins starring in a new adaptation of the drama The Dresser on BBC Two and new drama strand Dialogues, bringing together exceptional writing and acting talent to BBC Four.

"In a new partnership with the Arts Council of England, we’ll be working with theatres and theatre companies to explore new ways of making and broadcasting theatre on the BBC.

"And across the English Regions, we will be following 11 local theatres over the next six months as they tackle an array of challenges - on stage and off."
Plans from the specific to the vague as ever. Might have been useful to mention that The Dresser has a playwright, Ronald Harwood, and that it's not some random choice, The Dresser is about a touring theatre company. Oh and there's already a film version with a screenplay written by Harwood himself, so this isn't like plucking Sir Thomas More off the shelf and doing a version of that.  As ever, then, this looks like a commitment to theatre within certain limits.

The poetry season (yes, again) is bags more interesting and also features some classical:
"The poetry season will include a profile of the Poet Laureate Carol Ann Duffy; a special on Edmund Spenser’s The Faerie Queene; a drama adaptation of Simon Armitage’s long poem, Black Roses and Performance Poets In Their Own Words."
If all of this seems a touch mealy-mouthed, it looked with Malfi from the Globe like there was going to be a real transformation in the BBC's attitude to theatre.  But apart from some crumbs during Edinburgh nothing has changed that much. Drama on television is still new television work and literary adaptations. Anything filmed on a stage is an opera or ballet.  Far cry from the 70s when etc etc etc

On Evolution, Disruption and the Pace of Change by Simon Wardley

When people talk about disruption they often ignore that there is more than one type. You have the highly unpredictable form of product vs product substitution (e.g. Apple vs RIM) and then you have the highly predictable form such as product to utility substitution (e.g. Cloud). Confusing the two forms leads to very public arguments that disruption isn't predictable (Lepore) versus disruption is predictable (Christensen) - see New York Times, The Disruption Machine.

The answer to this is of course that they are both right, both can give examples to support their case and at the same time they're both wrong.  If you mix the two forms of disruption then you can always make a case that disruption appears to be both highly unpredictable and predictable. 

The second common mistake that I see is people confusing diffusion with evolution. The two are not the same and when it comes to 'crossing the chasm' then in the evolution of any single act there are many chasms to cross.

The third common mistake is to confuse commoditisation with commodification. This is such a basic error that it surprises me that it happens in this day and age. Still, it does.

The fourth common mistake is the abuse of the term innovation. This is so widespread that it doesn't surprise me many companies have little to no situational awareness and that magic thinking abounds. If you can't even distinguish this most basic of concepts into the various forms then you have little to no hope of understanding change.

Once you start mapping out environments then you quickly start to discover a vast array of common and repeatable patterns from componentisation to peace / war & wonder to the use of ecosystems. There's a vast array of tactical & strategic gameplay which is possible once you know the basics and contrary to popular belief, even changes in company age and the processes of how companies evolve are not in the realm of the mysterious & arcane. I've used this stuff for a decade to considerable effect from private companies to Governments.

The fifth common mistake is pace.

I recently posted a tongue in cheek pattern of how things change. I like to use a bit of humour to point to something with a bit of truth.

In the same way, I used my enterprise adoption graph to provide a bit of humour pointing to the issue of inertia.

However, whilst most people half-heartedly agreed with elements of the pattern of change (just like they agreed with the Enterprise Adoption Curve), there was equally disagreement with the timeline. The majority felt that the timeline was too long.

People often forget that concepts can start a long time ago, e.g. contrary to popular ideas, 3D printing started in 1967. The problem is that we are currently undergoing many points of "war" within IT as an array of activities move from product to utility services. Whilst these "wars" are predictable to a greater degree, they are also highly disruptive to past industries. Multiple overlapping "wars" can give us a feeling that the pace of change is rapid.

I need to be clear, that I'm not saying  the underlying rate of change is constant. The underlying rate of change does accelerate due to industrialisation (i.e. commoditisation) of the means of communication (see postage stamp, printing press, telephone, internet). BUT it's too easy to view overlapping "wars" as some signal that the pace of change itself is much faster than it is. 

It still takes about 30-50 years for something to evolve from genesis to the point of industrialisation. However, come 2025-2030 with many overlapping wars (from IoT to Sensors as a Service to Immersive to Robotics to 3D printing to Genetics to Currency) then it'll feel way faster than that and way faster than it is today.

Bank Computer. by Feeling Listless

Film Having begun to watch my way through the Harry Potter films (I'm only halfway through Chamber for various reasons), I just had to watch the above Buzzfeed video, which I won't spoil but which is endlessly amusing, not least because Hermione is clearly the most powerful wizard yet Potter's the most famous. I spent the entire duration trying to work out where I'd heard the voice over before. Sounds a bit like Roger Allam, but a glance towards the credits underneath indicates it's Kevan Brighting. Kevan Brighting, the IMDb informs us, played the Bank Computer in Doctor Who's Time Heist. Uncredited. You can't turn it off.

Is Diskmaker X taking forever to create your bootable OS X drive? by Zarino

My first Mac was a 2001 “Dual USB” iBook G3. Back then, Macs came with installer disks (CDs in the case of the iBook G3) and new releases of OS X would be sold, again as physical disks, for £79.

Time moves on though, and years ago Macs stopped coming with physical installer media. Call me old fashioned, but something about that scares me a bit.

I’ve previously talked about the importance of backing up your shit, and even shared some tips for setting up Time Machine on a Synology NAS, and selectively backing up to a USB drive with rsync.

Having your own physical installer media is just the next step in making sure, no matter what happens, you can get your Mac set up immediately, after disaster strikes.

I already have two USB drive installers, for OS X 10.7.2 and 10.8.4. But this weekend, I decided to create one for OS X 10.9 (Mavericks), and I hit a problem.

There didn’t seem to be much help out there on the interwebs, so here’s hoping Google will find this page next time someone in my position wonders why their OS X installer is taking ages to start.

“Command not found”

I was using Diskmaker X to create a bootable drive from an OS X Mavericks installer I’d downloaded from the Mac App Store months ago.

And whenever I ran it, Diskmaker X would hang on the following screen:

Diskmaker X error

It turns out, Diskmaker X was trying to show me an error message, but the super long directory path was hiding it. Here’s how it would have looked if I’d put the installer in the /Applications directory:

Diskmaker X “command not found”

The error says:

sudo: /Applications/Install OS X command not found

Odd. I reverted to running createinstallmedia myself, from the Terminal, to see whether Diskmaker X was the culprit:

sudo /Applications/Install\ OS\ X\ --volume /Volumes/Untitled --applicationpath /Applications/Install\ OS\ X\ --nointeraction

And I still got the same error.

Then I wondered whether the createinstallmedia file was actually executable. If you pass a normal file to sudo (rather than an executable program), “command not found” is exactly the sort of cryptic error you’d expect. I checked and—lo and behold—my copy of createinstallmedia wasn’t executable after all.
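The check itself is a one-liner. Here's a sketch of the test-and-fix cycle, run against a throwaway temp file purely for illustration — substitute your installer's createinstallmedia path:

```shell
# Illustrative only: a fresh temp file, like my copy of createinstallmedia,
# lacks the executable bit.
f=$(mktemp)
if [ -x "$f" ]; then
  echo "executable"
else
  echo "not executable"
fi
chmod +x "$f"           # the fix: set the executable bit
[ -x "$f" ] && echo "now executable"
rm -f "$f"
```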

Easily fixed:

sudo chmod +x /Applications/Install\ OS\ X\

Once chmod had made the file executable, Diskmaker X was happy again, and my OS X 10.9 installer drive was set up in about 25 minutes.

Diskmaker X

I have no idea why createinstallmedia wasn’t executable in my version of the installer – maybe it has something to do with me storing the installer on an external disk for the best part of a year – but at least it was a simple fix once I worked out what was going on.

February 14, 2015

You've Got Mailed. by Feeling Listless

Film Vanity Fair has an affectionate oral history of Norah Ephron and You've Got Mail, a film I've grown to love across time. Her attention to detail was extraordinary:

"The extra who is playing the florist [in the beginning of the film] is pregnant. We put a little pad in her tummy. And one of the things you will see later in the movie is when Meg is buying flowers at that florist, there’s a little sign in the window that says, “It’s a girl.”"
I wonder what the film must look like to later audiences, where the process of email amounts to wading through pages of messages from Amazon trying to sell you things and in my case PR emails for London galleries I'll never visit.  The closest we have now are Facebook messages and Twitter DMs, but neither, I suspect, prompt the florid paragraphs which we and the characters in the film used to write.

A Fresh Approach to Fundraising by The Planetary Society

We want you to know that we’ve been listening to you. Members have highlighted the number of fundraising appeals from The Society, and we agree that the number of requests should be streamlined.

An active comet, from a distance by The Planetary Society

Rosetta has closed to within 50 kilometers of Churyumov-Gerasimenko, on its way to a very close, 6-kilometer flyby of the comet tomorrow. To prepare for the flyby, Rosetta traveled much farther away, allowing it to snap these amazing photos of an increasingly active comet from a great distance.

February 13, 2015

Why big data won't improve business strategy for most companies. by Simon Wardley

Over the years, I've heard a lot of people talk about algorithmic business and how big data will improve business strategy. For most, I strongly suspect it won't. To explain why, I'm going to have to say certain things that many will find uncomfortable. To begin with, I'm going to have to expand on my Chess in Business analogy.

I want you to imagine you live in a world where everyone plays Chess against everyone else. How good you are at Chess really matters: it determines your status and your wealth. Everyone plays everyone through a large computer grid. Ranks of Chess players are created and those that win are celebrated. Competition is rife.

The oddest part of this however is that no-one in this world has actually seen a Chessboard.

When you play Chess in this world, you do so through a control panel and it looks like this ...

Each player takes it in turn to press a piece. Each player is aware of what piece the other pressed. 

And so the game continues until someone wins or it is a draw. Unbeknownst to either player, there is actually a board, and when they press a piece then a piece of that type is randomly selected. That piece is then moved on the board to a randomly selected position from all the legal moves that exist. White's opening move might be to select 'Pawn'; one of White's eight pawns would then be moved one or two spaces forward. But all of this is hidden from the players, they don't know any of this exists, they have no concept there is a board ... all they see is the control panel and the sequence of presses.
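The hidden-board mechanic can be sketched in a few lines of Python (my own toy illustration, not Wardley's; to keep it short the board holds only pawns, moving one square forward with no captures):

```python
import random

def legal_moves(board, colour, piece_type):
    """All (src, dst) moves for this colour's pieces of piece_type."""
    step = 1 if colour == "white" else -1
    moves = []
    for (file, rank), (c, p) in board.items():
        dst = (file, rank + step)
        if c == colour and p == piece_type and 0 <= dst[1] < 8 and dst not in board:
            moves.append(((file, rank), dst))
    return moves

def press(board, colour, piece_type):
    """The player only names a piece type; the hidden board picks the move."""
    moves = legal_moves(board, colour, piece_type)
    if not moves:
        return None  # no legal move for that piece type
    src, dst = random.choice(moves)
    board[dst] = board.pop(src)
    return src, dst

# Eight pawns each, on the second and seventh ranks of a hidden 8x8 board.
board = {(f, 1): ("white", "pawn") for f in range(8)}
board.update({(f, 6): ("black", "pawn") for f in range(8)})
src, dst = press(board, "white", "pawn")  # all the player "sees" is the press
```

The player learns only that "Pawn" was pressed; which pawn moved, and where, lives entirely on the hidden board — which is the point of the analogy.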

However, within that sequence people will find 'favourable' patterns e.g. the best players seem to press the Queen as often as possible! People will come up with their own favourite combinations and try to learn the combinations of other winners! 

"When your opponent moves the Knight, you should counter with Queen, Queen, Rook" etc. 

People in this world would write books on secret knowledge such as "The art of the Bishop" or "The ten most popular sequences of Successful Winners". The more you aggregate data from different games, the more patterns will be proclaimed. Some through experience might even gain a vague notion of a landscape. Let us suppose one of those people - by luck, by accident or by whatever means - twigs that a landscape does exist and somehow finds a way to interact with it.

Imagine one day, you play that individual but they can see something quite remarkable ... the board. You start off planning to use your favourite opening combination of Pawn, Pawn, Knight, Bishop but before you know it, you've lost.

Obviously, this was just luck! But every time you play this player, you lose and you lose fast. You keep recording their sequences. They beat you in two moves - Pawn (b) and Queen (b) (known as Fool's Mate) or they beat you in four moves - Pawn (b), Queen(b), Bishop(b), Queen(b) (known as Scholar's mate) but you can't replicate their success even though you copy their sequences, it never works and you keep on losing.

People will start to seek all different forms of answers to explain what is happening. Maybe it's not just the sequence but the way that the player presses the button? Maybe it's timing? Maybe it's their attitude? They seem to be happy (because they're winning). Is happiness the secret to winning? All sorts of correlations and speculations will be jumped upon.

However there is no luck to this or special way of pressing buttons. The player controlling Black simply has far greater situational awareness. The problem for the player controlling White is they have no real understanding of the context of the game they are playing. White's control panel is just a shadow of the landscape and the sequence of moves lacks any positional information. When faced with a player who does understand the environment then no amount of large scale data analysis on combinations of sequences of presses through the control panel is going to help you.

I need to emphasise this point of understanding the landscape and having good situational awareness, so we'll now turn our attention to Themistocles.

The battle of Thermopylae (the tale of the three hundred) and the clash between the Greeks and the mighty army of Xerxes has echoed throughout history as a lesson in the force multiplier effect of landscape. Themistocles devised a strategy whereby the Athenian navy would block the straits of Artemision forcing the Persians along the coastal road into the narrow pass of Thermopylae where a smaller force could be used to hold back the vastly greater Persian numbers. I've provided a map of this below. 

Now, Themistocles had options. He could defend around Athens or defend around Thebes but he chose to block the straits and exploit the landscape. Each of Themistocles' options represents a different WHERE on the map. In the same way that our enlightened Chess player has many WHEREs to move pieces on the board. By understanding the landscape you can make a choice based upon WHY one move is better than another. This is a key point to understand. 

WHY is a relative statement i.e. WHY here over there. However, to answer the question of WHY you need to first understand WHERE and that requires situational awareness.

Now imagine that Themistocles had turned up on the eve of battle and said - "I don't know WHERE we need to defend because I don't understand the landscape but don't worry, I've produced a SWOT diagram"

How confident would you feel?

Now in terms of combat, I'd hope you'd agree that a strategy based upon an understanding of the landscape is going to be vastly superior to a strategy that is derived from a SWOT. The question you need to ask yourself, however, is what do we most commonly use in business?

The problem that exists in most businesses is that they have little or no situational awareness. Most executives are even unaware they can gain greater situational awareness. They are usually like the Chess Players using the Control Panel and oblivious to the existence of the board. I say oblivious because these people aren't daft, they just don't know it exists.

But how can we be sure this is happening?

Language and behaviour are often our biggest clues. If you take our two Chess players (White with little or no situational awareness, Black with high levels) then you often see a marked difference in behaviour and language.

Those with high levels of situational awareness often talk about positioning. Their strategy is derived from WHERE and so they can clearly articulate WHY they are making a specific choice. They tend to use visual representation (the board, the map) to articulate encounters both present and past. They learn from those visualisations. They learn to anticipate others' moves and prepare counters. Those around them can clearly articulate the strategy from the map. Those with situational awareness talk about the importance of WHERE - "we need to drive them into the pass", "we need to own this part of the market". They describe how strategy is derived from understanding the context, the environment and exploiting it to your favour. The strategy always adapts to that context.

Those with low levels of situational awareness often talk about the HOW, WHAT and WHEN of action (e.g. the combination of presses). They tend to focus on execution as the key. They poorly articulate the WHY, having little or no understanding of potential WHEREs. They have little or no concept of positioning and they often look to 'copy' the moves of others - "Company X gained these benefits from implementing a digital first strategy, we should do the same". Any strategy they have is often gut feel, storytelling, alchemy, copying, magic numbers and bereft of any means of visualising the playing field. Those around them often exhibit confusion when asked to describe the strategy with any precision beyond vague platitudes. Those who lack situational awareness often talk about the importance of WHY. They describe how strategy should be derived from your vision (i.e. general aspirations). They often look at you agog when you ask them "Can you show me a map of your competitive environment?"

Back in 2012, I interviewed 160 different high tech companies in Silicon Valley looking at their level of strategic play (specifically their situational awareness) versus their use of open (whether source, hardware, process, APIs or data) as a way of manipulating their environment. These companies were the most competitive of the competitive.

What the study showed was that there was a significant difference in the levels of strategic play and situational awareness between companies. When you add in market cap changes over the last seven years, those with high levels of strategic play had performed vastly better than those with low levels.

Now this was Silicon Valley and a select few. I've conducted various interviews since then and I've come to the conclusion that less than 1% of companies have any mechanism for improving or visualising their environment. The vast majority suffer from little to no situational awareness, but then they are competing in a world against others who also lack situational awareness.

This is probably why the majority of strategy documents contain a tyranny of action and why most companies seem to duplicate strategy memes and rely on backward causality. We should be digital first, cloud first, we need a strategy for big data, social media, cloud, insight from data, IoT ... yada yada. It's also undoubtedly why companies get disrupted by predictable changes and are unable to overcome their inertia to the change.

At the same time, I've seen a select group of companies and parts of different Governments use mapping to remarkable effect and discovered others who have equivalent mental models and use this to devastate opponents. Now, don't believe we're anywhere near the pinnacle of situational awareness in business - far better methods / techniques and models will be discovered than those that exist today.

Which brings me back to the title. In the game of Chess above, yes you can use large scale data analytics to discover new patterns in the sequences of presses but this won't help you against the player with better situational awareness. The key is first to understand the board.

Now certainly in many organisations the use of analytics will help you improve your supply chain or understand user behaviour or marketing or loyalty programmes or operational performance or any number of areas in which we have some understanding of the environment. But business strategy itself operates usually in a near vacuum of situational awareness. For the vast majority then I've yet to see any real evidence to suggest that big data is going to improve "business strategy". There are a few and rare exceptions.

For most, it will become hunting for that magic sequence ... Pawn, Pawn ... what? I've lost again?

-- Addition 13 Feb 2015

I'm no fan of strategy consultancy in general because most of what I see lacks any form of situational awareness and much of the field is uncomfortably close to snake oil. During my time as a CEO, I had to wade through endless drivel before I started to realise what the problem was. I thought I'd give you my top tips on how to spot and react to a dud.

How to react to Strategy Consultants.

1) Focus on Why? As soon as someone says this, you should say "Why is a relative statement such as why here over there? How do we determine where?" - any bluster on this should make you point the way to the exit. Remember - strategy always starts with "where?"

2) You're asking the wrong question! This is like someone telling you that you're using the wrong sequence - it's magic thinking, secret arts etc. Ask them "How do I know if I'm asking the right question?" - unless they can provide you with a repeatable method, or if there is any bluster on this, then you politely ask them to "close the door on your way out".

3) Our method works! This should just make you run a mile. At best a method can teach you how to understand the environment, learn to play and learn to manipulate it to your advantage. Any mechanism for improving situational awareness will be imperfect. Any "formula" for success is again - magic thinking.

4) Company X did this and they were successful. If this is a call to implement an activity then this is backward causality. They should show the competitive landscape, how Company X changed that landscape to their favour by the introduction of this act, whether this pattern is repeatable to your competitive landscape and what the underlying mechanisms of change are. If they can't do this then ask them to leave.

5) 67% of companies have an XYZ strategy. This is just meme copying. Call security.

6) The top ten habits of successful ... run a mile AND call security. This is no different from the "top ten sequences of successful players" and unless they can provide a very precise mechanism for understanding the impact of those habits (i.e. understanding the environment and how it is manipulated by them) then it's just gibberish.

7) The future is ... uncertain. Unless they can demonstrate repeatable and measurable patterns, the limitations of the patterns and the fundamental causes of the patterns in a manner which enables you to see the environment, then this is not anticipation but guesswork, or pre-existing trends so obvious they are of no differential value. In either case, you should be shouting "Next!"

8) You should align your strategy with your vision! A vision is a set of generalised aspirations. A strategy should be derived from an understanding of the context and the environment. I find throwing heavy books usually helps get them out of the room quickly.

9) You should align your business and IT strategy! Chances are you don't actually have a business strategy (otherwise you wouldn't be asking someone to help you improve your situational awareness) because if you did have high levels of situational awareness then you'd know this is an artificial division. If you've got a lion hidden under the table, now is the time to let it loose or at least tell them you're about to let one loose.

10) The key is to focus on execution! At times like this it is useful to have another door in the office that leads to a room with a table, a sheet of paper on it and a false floor covering a huge tank of sharks with frigging lasers. Say "Through that door is a room with a piece of paper on the table. If you answer the question quickly I'll give you a $100M strategy consultancy gig". As they charge into the room and turn the paper over, the false floor should disappear and the sharks should have their fun. Oh, as for the question on the paper - it should be "Well done on a speedy execution ... in your next life ... do you think situational awareness might help?" 

Laugh test. by Feeling Listless

Film Find above the trailer for director Anne Fletcher's new comedy Hot Pursuit. Don't watch it though - it probably gives away most of the story, which is that Reese Witherspoon plays a cop who has to courier mob witness Sofía Vergara across country.  In a normal world this would look like average fare, but here we are: two female leads in a comedy directed by a woman, admittedly with two male credited screenwriters, but nonetheless.  For some context, Fletcher's also a prolific choreographer, her past triumphs including working with Joss on Buffy's Once More With Feeling and three episodes of Firefly.

Vacation Reading by Albert Wenger

We will be going on a family ski vacation, so there will likely be no new posts until the week after next. In the meantime, though, here are the books that I am bringing with me:

  • The Peripheral by William Gibson. I am a huge William Gibson fan, of Pattern Recognition in particular and of course Neuromancer.

  • Sex at Dawn by Christopher Ryan and Cacilda Jetha. I am about halfway through this fascinating book about the earliest origins of human sexuality and their potential implications for our behaviors today. The book builds a strong case against the standard model that has been used to support the predominant conception of what marriage should be.

  • How Much is Enough? by Robert and Edward Skidelsky, which I just started at the recommendation of Joshua Foer. It starts out examining the famous Keynes essay on “Economic Possibilities for our Grandchildren" that has been much cited recently.

One of the things I love about ski vacations is that they offer so much reading time. I no longer feel the need to ski from lift opening to closing — now I am often happy to call it a day shortly after lunch. Looking forward to a great combination of outdoors, reading and time with family and friends.

Too Cool to follow the Local Trend: Galaxies in the Early Universe by Astrobites

Title: Star Formation Rate and Dynamical Mass of 10^8 Solar Mass Black Hole Host Galaxies at Redshift 6
Authors: Chris J. Willott, Jacqueline Bergeron, and Alain Omont
First Author’s Institution: Herzberg Institute of Astrophysics

Black holes reside in the centers of most galaxies. Our Milky Way, for instance, hosts a 4×10^6 M☉ black hole known as Sagittarius A*. Black holes interact with their host galaxies through various feedback mechanisms and affect the properties of their host galaxies.

Black holes and Star Formation

Galaxies form stars at different rates; some more slowly than others. The rate at which a galaxy creates stars is known as the star formation rate, or SFR. Black holes are thought to play a role in their host galaxies' SFRs, primarily through the high-energy jets they emit, which send shock waves through the surrounding medium; whether this mechanism triggers or shuts down star formation is still an open question.

Black holes and Velocity Dispersion in Galaxy Bulges

Stars do not move in an orderly fashion in stellar populations. Although they are still gravitationally bound as a whole, the local motions of individual stars are random, in different directions. This results in a range of stellar velocities in stellar populations. Stars in the bulges of galaxies also demonstrate this velocity spread, usually denoted as σ. A famous relation known as the M_BH–σ relation ties the mass of a black hole to the velocity dispersion of the stars in the galactic bulge. It implies that the growth of black holes and the growth of galaxy bulges are related and not independent of each other. For those who are interested in learning more about velocity dispersion in stellar populations, this is an Astrobites article which explores the relation between velocity dispersion and galaxy evolution.

Black holes and Quasars

Because most structural growth happened at early times, investigating the tight correlation between black hole mass and galaxy properties at early times is important for understanding the origins of these correlations. To do this, astronomers look for the most distant sources in the Universe (that are still bright enough to be observed today), and these happen to be high-redshift quasars. Quasars are high-energy sources fueled by accretion of matter onto supermassive black holes (M_BH ~ 10^8-10^9 M☉), while "high-redshift" just means they formed very early on in the Universe. For example, the highest-redshift quasar known to date is at z (redshift) = 7.1, corresponding to a time less than 800 Myr after the Big Bang. Studying the properties of high-redshift quasars (z ≳ 6) and their host galaxies will enable us to understand the interaction between black holes and their host galaxies in the early Universe.
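
That redshift-to-age conversion can be sketched with the closed-form age of a flat matter-plus-Lambda universe. The cosmological parameters below are assumed Planck-like round numbers, not values taken from the paper:

```python
import math

# Sketch: age of a flat matter + Lambda universe at redshift z.
# H0 (km/s/Mpc) and Omega_m are assumed, Planck-like values.
def age_gyr(z, h0=67.7, om=0.31):
    ol = 1.0 - om                     # flatness: Omega_Lambda = 1 - Omega_m
    hubble_time = 977.8 / h0          # 1/H0 in Gyr (977.8 converts km/s/Mpc)
    x = math.sqrt(ol / om) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) * hubble_time / math.sqrt(ol) * math.asinh(x)

print(f"z=7.1 -> {age_gyr(7.1):.2f} Gyr")   # roughly 0.75 Gyr after the Big Bang
```

The same function gives ~13.8 Gyr at z=0, a useful sanity check that the formula and unit conversion are consistent.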

In this paper, the authors studied how the star formation rate (SFR), velocity dispersion σ, and dynamical mass M_dyn of high-redshift quasar host galaxies depend upon their black hole accretion rates and masses. A galaxy's dynamical mass takes into account its dark matter content and so tends to be larger than its stellar mass. Since most of this mass is thought to reside in the bulge, the dynamical mass of a galaxy is treated as the bulge mass. The authors took observations of two z~6 quasar host galaxies from the Canada-France High-redshift Quasar Survey (CFHQS) using the Atacama Large Millimeter Array (ALMA), which has the sensitivity required to probe the SFR and dynamical masses of galaxies at high redshifts. They combined these observations with their previous study of two more high-redshift quasar hosts to give a total sample of four.

The authors discovered that high-redshift quasar hosts have rather low SFRs, despite having very high black hole accretion rates. This differs from what is observed in the low-redshift Universe, where the star formation rate rises in step with the black hole accretion rate, as studied in this paper. Figure 1 shows the star formation rate against redshift, with L_FIR used as a star formation tracer. Note the clear rise in L_FIR up to a peak at z~2 followed by a decline out to z=6; the rise to z~2 is attributed to the increase in SFR in massive galaxies within this redshift range.


FIG 1 – Mean far-IR luminosity for quasars at various redshifts. The blue square is one of the quasars in this paper (the other quasar is only marginally detected, and so is excluded from the sample). The magenta curve is a model prediction of how L_FIR varies as a function of redshift. The low-redshift quasars show about a 4× increase in L_FIR from z = 0.3 to z = 2.4. The z~6 quasar, however, has a comparatively lower L_FIR, implying a lower SFR than its lower-redshift counterparts.

The authors also studied the M_BH–σ and M_BH–M_dyn relations of their high-redshift quasar hosts. Historically, σ is measured from galaxy bulges, but since bulges are less common at high redshifts, they used the [C II] line to determine σ instead. The [C II] line is also used to determine the dynamical masses of the host galaxies. Figure 2 illustrates these relations. Their quasars are distributed around the local M_BH–σ relation, albeit with a scatter well beyond the size of their error bars. The same can be said of their M_BH–M_dyn relation.


FIG 2 – The left plot shows the M_BH–σ relation for various z~6 quasars. Quasars from this paper are shown as blue squares. The black line is the local M_BH–σ relation (Kormendy & Ho 2013) with its scatter (shaded region). The right plot shows M_BH versus host galaxy dynamical mass M_dyn for z~6 quasars. Again, the black line is the local correlation with its scatter (Kormendy & Ho 2013). Quasars from this paper lie within the local relation, whereas the most massive black holes (green circle and cyan diamond points) display large offsets.

The fact that high-redshift quasars lie on the M_BH–σ relation suggests that the host galaxies have undergone substantial evolution to acquire their current high dynamical masses. Nevertheless, one wonders why this mass accumulation did not lead to a high SFR, as suggested by their low L_FIR. One reason could be strong jets from the central quasar inhibiting star formation. Another could be that L_FIR is not a good tracer of star formation at such high redshifts. An alternative star formation diagnostic is L_[C II], and the authors noted that using L_[C II] to trace SFR would bump it up by a factor of 3 for one of their quasars. As such, they recommended higher-resolution [C II] observations of their quasars to more accurately constrain the various correlations between black holes and their host galaxies at high redshifts. Until then, their current results show that galaxies around supermassive black holes in the early Universe are just too cool to be grouped with everybody else.

Cassini begins a year of icy moon encounters with a flyby of Rhea by The Planetary Society

At last! Cassini is orbiting in Saturn's ring plane again. I do enjoy the dramatic photographs of Saturn's open ring system that Cassini can get from an inclined orbit, and we won't be getting those again for another year. But with an orbit close to the ring plane, Cassini can repeatedly encounter Saturn's icy moons, and icy moon flybys are my favorite thing about the Cassini mission.

In Pictures: DSCOVR Headed for Deep Space by The Planetary Society

On Wednesday evening, a SpaceX Falcon 9 rocketed into orbit with DSCOVR, the Deep Space Climate Observatory. Here's a photo and video roundup.

February 12, 2015

Chess in Business by Simon Wardley

I want you to imagine you're playing a game of chess on a computer against another opponent. But rather than looking at the board, I want you to pretend that you're unaware that a chess board even exists.

When you play Chess, you have a control panel and it looks like this ...

When White moves then one of the pieces on your control panel flashes ...

You might press the piece or choose another. Let's say you select the Bishop, then White sees this on their control panel ...

and so the game continues.

Now obviously pieces are moving on the board but both players are unaware that a board exists. We shall assume some process whereby a random piece of the selected type is chosen and moved a random number of squares in a random direction, according to the rules of chess.

Eventually some player will press a button and win! People will take note, possibly even copy their moves. You'll probably get people compiling lists of combination presses, and all sorts of wonderful books and information will be generated to help you press the right button. Of course, all of this will be storytelling, anecdotes, gut feel and alchemy.

Now imagine, that one day, you play another individual but unbeknown to you, that player doesn't have a control panel. In fact what they see is something almost magical ... this ...

You play your game through your control panel, using your favourite "special" combinations and whatever the Big Name Strategy consultancy says is the latest set of popular presses. No matter what you do, you're going to get your arse kicked. Even your team of overpriced strategy consultants harping "Press the Rook, Press the Rook" won't save you.

But worse than this, the more you play this person then the better they will become and the arse kickings will get even more frequent.

Mapping is like suddenly exposing the chess board in business and playing against companies who view the world through the equivalent of a control panel. This is why it always makes me smile to hear people talking about business strategy as either like chess or going beyond it. Most business strategy is nowhere close to this - it's more storytelling, anecdotes, gut feel and alchemy.

A short note on Dimensions in Time and canonicity. by Feeling Listless

TV Gen of Deek has an exhaustive blog post about the joint history of Doctor Who and Eastenders. Everything goes swimmingly, with a detailed description of Dimensions in Time, until it has this to say about it:

"Don’t worry kids though, it’s not canonical."
If you say so. I've never thought it wasn't. As the TARDIS Datacore explains, there's a reference to it in the Virgin New Adventure First Frontier which suggests the Doctor was well aware he was walking around in a soap opera. If you accept that the Eastender therein was some kind of Auton simulacrum (explaining Pauline) or some such, it's perfectly fine. Even if you don't, it could be the usual time-differential business which leads Doctors to forget some moments when they have met up (à la The Day of the Doctor). Either way, it may be rubbish, but it still counts.

But then, of course, in Doctor Who there is no such thing as ‘canon’. Unless there is.

Running a short Open Space meetup by Aptivate

“Open Space is mysterious, chaotic — we are stepping into the unknown. How will we organize ourselves? What will we accomplish? We won’t know until it’s over, and maybe not even then.” I said something like this to a group of international development professionals, members of the Bond T4D and MEL communities of practice, on Tuesday the 27th of January, 2015.

I often say this sort of thing when introducing the Open Space method of organising events, and it is true. But...

Birthday by Albert Wenger

It is my birthday today and thanks to Susan’s planning we had a fantastic day as a family starting with a yummy blueberry pancake and bacon breakfast. After a great Korean lunch we went and saw Cabaret. The kids were a bit stunned by how dark it is. There was a “why see *that* on your birthday?” reaction. But it was also clear that Cabaret had left a lasting impression. And it will be something we will talk about not just today but on many future occasions. As such it is the best of all possible gifts — a memorable experience we all shared. Thanks, Susan!

Evaporating Envelopes for Earth-like Results by Astrobites

Artist's illustration of planets orbiting an M-dwarf star (NASA/JPL-Caltech).

Artist’s illustration of planets orbiting an M-dwarf star (NASA/JPL-Caltech).

Title: Habitable Evaporated Cores: Transforming Mini-Neptunes into Super-Earths in the Habitable Zones of M Dwarfs
Authors: Rodrigo Luger, Rory Barnes, Eric Lopez, Jonathan Fortney, Brian Jackson, Victoria Meadows
First Author’s Institution: University of Washington Astronomy Department & Virtual Planetary Laboratory
Status: published in Astrobiology

M-dwarf stars are a great place to look for habitable planets. Since these stars aren't very luminous, their habitable zones (HZs) are very close-in, a tenth or less of Earth's distance from our Sun, which means short orbital periods and frequent, easy-to-spot transits. M-dwarfs' low mass also means that Earth-mass planets are easier to detect than around more massive stars, since they produce a more pronounced signal via the radial velocity method.
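
The close-in HZ follows from simple flux scaling: since insolation falls off as 1/d^2, the distance receiving Earth-like flux sits at roughly sqrt(L/L_sun) AU. This is a rule of thumb, not the detailed HZ modeling used in the paper:

```python
import math

# Sketch: rough habitable-zone distance from stellar luminosity.
# Pure inverse-square flux scaling; ignores albedo, greenhouse effects, etc.
def hz_distance_au(l_over_lsun):
    """Distance (AU) at which a planet receives Earth-like insolation."""
    return math.sqrt(l_over_lsun)

# A dim M-dwarf at 1% of the Sun's luminosity:
print(hz_distance_au(0.01))   # 0.1 AU, a tenth of Earth's distance
```

By Kepler's third law, an orbit that close to a low-mass star takes only days to weeks, which is what makes transit surveys of M-dwarf HZs so efficient.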

There are downsides to trying to live around an M-dwarf, though: low mass for a star means slower evolution, which means a longer time in the violent, hyperactive phases of stellar youth. A terrestrial planet present for those flares and bombardment could be sterilized. And besides, a planet that would form as close in as an M-dwarf’s habitable zone would likely be very small, less than a third of Earth’s mass, and very dry, since water and the volatile compounds that make a rocky planet habitable are mostly found farther out in the protoplanetary disk when planets are forming. No matter how perfect your level of insolation, small, dry, and sterilized does not a habitable planet make.

But wait. We know that planets don’t have to form where they end up. Disk-driven migration and planet-interaction scattering events can pull planets from more distant formation locations into narrow paths around their stars. (That’s the going theory for all those hot Jupiters.) Today’s paper looks not at whether an Earth-like planet could migrate into the habitable zone of an M-dwarf, but instead at whether a Neptune-like planet could migrate in and, perhaps, in the process, become Earth-like somehow.

The authors of this paper investigate whether the properties of an M-dwarf that might make it inhospitable could also give it transformative powers over a Neptune that got drawn in close. Could the star’s gravity and violence strip away a planet’s thick atmosphere, or envelope, to reveal a habitable core?

Short answer: in many cases, yes! Long answer: there are many considerations. The interactions between migration, tidal forces, orbital eccentricity, planetary mass, planetary radius, stellar luminosity, and stellar activity over time are not only incredibly complex but also not yet fully understood. (If you’re interested in the specifics, I strongly recommend checking out the paper itself. Complexities of planet-modeling aside, the authors also do a lovely and thorough job of explaining, assessing, and building on many strands of prior research, Burke’s “parlor” in action.)

The primary forces that could strip a Neptune down to its core are gravity and heat. When a planet migrates in toward its star, both of these forces, coming from the star, affect it more strongly (heat by way of stellar radiation). On the gravity front, the planet's Roche lobe becomes smaller. (The Roche lobe concept usually applies to binary stars, but it works for our planet-star system here as well.) This means that the planet's region of gravitational influence contracts: gas that once was bound to the planet no longer is. Whoosh.
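
The shrinking Roche lobe can be sketched with Eggleton's (1983) approximation for the Roche-lobe radius, applied here to a planet-star pair. The mass ratio and separations below are illustrative, not values from the paper:

```python
import math

# Sketch: Eggleton's approximation for the Roche-lobe radius of the
# lighter body, valid for any mass ratio q = m_planet / m_star.
def roche_lobe_radius(a, m_planet, m_star):
    """Roche-lobe radius of the planet, in the same units as separation a."""
    q13 = (m_planet / m_star) ** (1.0 / 3.0)
    return a * 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

# A Neptune-mass planet (~5e-5 stellar masses) migrating inward:
far  = roche_lobe_radius(0.5, 5e-5, 1.0)   # at 0.5 AU
near = roche_lobe_radius(0.1, 5e-5, 1.0)   # at 0.1 AU
print(far, near)   # the lobe shrinks in proportion to the separation
```

Because the formula is linear in a, halving the orbital separation halves the lobe: any envelope gas beyond the new, smaller lobe is no longer bound to the planet.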

By the time a Neptune gets to an M-dwarf's habitable zone, then, it's already lost a good deal of its envelope. And then it's all up close to the star's potentially violent business: extreme ultraviolet radiation, or XUV, can heat the atmosphere so powerfully that rather than individual atoms just heating up and flitting away, per Jeans escape, whole chunks of the atmosphere blow off at once. In this XUV blow-off mechanism, heating leads to expansion of the outer envelope, accelerating light ions to supersonic speeds. If the gas velocity exceeds the planet's escape velocity, it escapes en masse, taking heavier molecules with it.
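
To see how blow-off differs from Jeans escape, compare a hydrogen atom's thermal speed with the planet's escape velocity: in ordinary Jeans escape only the fast tail of the thermal distribution exceeds v_esc, whereas blow-off requires the bulk flow itself to reach it. The planet mass, radius, and temperature below are illustrative, not from the paper:

```python
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23   # Boltzmann constant, J/K
M_H = 1.67e-27    # mass of a hydrogen atom, kg

def escape_velocity(mass_kg, radius_m):
    """Escape velocity at a planet's surface, m/s."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

def thermal_speed(temp_k, particle_mass_kg):
    """Most probable thermal speed of a gas particle, m/s."""
    return math.sqrt(2.0 * K_B * temp_k / particle_mass_kg)

# Earth-mass core vs. hydrogen gas at ~1000 K (illustrative numbers):
v_esc = escape_velocity(6.0e24, 6.4e6)   # ~11 km/s
v_th  = thermal_speed(1000.0, M_H)       # ~4 km/s
print(v_esc, v_th)
```

With v_th well below v_esc, only the Maxwellian tail leaks away; XUV heating that drives the bulk outflow past v_esc is what empties the envelope wholesale.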

That’s the model, but whether or not these mechanisms will strip a planet of its entire envelope depends on many factors, some of which are unknown. For example, M-dwarf evolution has not been studied enough to be settled. If an M-dwarf emits powerfully in the XUV range, then it could fully evaporate a Neptune’s gas envelope. But if its emissions skew more toward X-rays? Our Neptune won’t have as good a chance.

Other variables in this complex system are better known but are still variable. The authors of this paper tested many, many variables and relationships to see if this process is possible and they found that, yes, often enough, it probably is. The sweet spot: a planet with a rocky core of a bit less than Earth’s mass and a gaseous envelope of about the same mass of hydrogen and helium migrates early in the course of planet formation to the inner habitable zone of an M-dwarf that emits a lot of XUV radiation. This means that super-Earths in the HZs of M-dwarfs aren’t likely to be the evaporated cores of former Neptunes. They more likely formed in situ, which means that even if they’re big, they’re likely dry and even sterilized. But Earth-mass planets in M-dwarfs’ HZs stand a chance of having once migrated in, and though they’ve lost their gaseous envelopes, they may have held onto their water and interesting chemistries and could be home to, possibly, just maybe, life.

Europe's Experimental Spaceplane Completes Successful Test Flight by The Planetary Society

The IXV spaceplane, designed to demonstrate reentry technologies, splashed down in the Pacific Ocean this morning after a successful, 100-minute test flight.

February 11, 2015

On Government, platform, purchasing and the commercial world. by Simon Wardley

Whenever you run a large organisation, the first step should always be to get an idea of what you do, how much of it you do, what each transaction costs and, if you're a commercial company, what revenue each transaction makes. This is the most super simple basic information that every CEO should have at their fingertips.

In the case of UK Gov, there are 784 known transactions of which 700 have data on volume and 178 have published cost per transaction. These are shown in figures 1 & 2 (source - UK Gov High Volume Services Transactions)

Figure 1 - Volume of transaction

Figure 2 - Cost per Transaction

In the UK Gov they know that the cost of a memorial grant scheme claim is £1,085 but a driving licence replacement costs only £7.85. Equally they know that we have over 393,000 charity applications, changes of details and gift aids but only 19 Human Tissue Authority Licensing Applications.

Any company of significant size will have a number of core transactions though it probably won't be anywhere near the scale and complexity of UK Gov.  But let us take a hypothetical global Media company with 50 or so different core transactions from the Commissioning of TV programmes to Merchandising to a Games Division to Broadband provision to Mobile Telephony to News Content to Radio to ... etc etc.

Now, each of those transactions should represent the provision of a user need. That is after all where value should be created. Now, the purpose of mapping is not just to force a focus on user needs, improve communication, improve strategic play, use of appropriate methods, provide a mechanism of organisational learning and mitigate risks but also to remove duplication and bias.

If you map out multiple different transactions, you will find that common elements exist between them and that bias exists in the way groups treat those activities. (see steps 4 and 5 on mapping and if you haven't read that post, you need to in order for the following to make sense). 

By aggregating multiple maps together, you can create a view of how things are treated (see figure 3) and then determine (by looking at clusters) how things should be treated (see figure 4)

Figure 3 - An Aggregated view

Figure 4 - How things should be treated.

There are two important parts to figure 4 - first, the duplication (i.e. how many instances of this thing exist in our maps) and second, how it should be treated (determined by looking at clusters of how it is currently treated), which is a necessity in overcoming bias. It's not uncommon to find 18 different references to and examples of compute in your maps and have half a dozen groups arguing that somehow their "compute" is special to them and unlike anybody else's. If you're up for a laugh, try rules engines (a particular favourite of mine) and enjoy endless arguments by different groups about how their near-identical systems are unique.

Now this aggregate map also has other purposes. Let us suppose you want to build a platform by which I mean not some "monolithic does everything" platform (a guaranteed route to quick failure) but a mass of discrete component services exposed through APIs and often associated with a development environment. When you're building those component services, the last thing you want to build is the uncommon and constantly changing component (i.e. those in the uncharted space). What you want to build is the common and fairly commodity like component services which are therefore well understood, stable and suitable for more volume operations.

Now, fortunately, the aggregated view shows you where these are in all your value chains. Sometimes you'll find the market has already provided a component service for you to use, e.g. Amazon EC2 for compute. There may be reasons, such as the buyer / supplier relationship, why you may wish to choose a particular route over another, but at least the aggregated map points you to where you should be looking.

In other cases, you'll find no market solution even though a component is common and commodity-like, and this is your target for building a component service. For example, if Fraud Analysis is a common component of many of your value chains and assuming it's not provided in the market, then this is a component service for you to consider. Now, in the case of Gov, that component service could be provided by a central group such as GDS or even a department. For example, if DWP was particularly effective at Fraud Analysis then there is no reason why it shouldn't offer this as a component service to other departments. Ditto GCHQ and large-scale secure data storage.

In fact, Government has a mechanism - known as G-Cloud - where departments could compete with the outside market to provide common components to others. All it requires is some central group such as GDS to identify, from all the maps of different value chains, what the common components should be and to offer the opportunity to a department to provide each one (and if none accepts then GDS could look to provide it itself).

But let us not digress and go back to our Media company. We've taken an aggregated view, we've challenged bias in the organisation, we've identified duplication and determined where we're going to consume services from other groups and what services we're going to provide (hence building our platform of component services), we've even added some strategic play into the mix using open source (see previous post). We now subdivide this environment into contract components (for reasons of risk mitigation), apply the right methods and build a suitable team structure. Our map and understanding of the environment hopefully took only a few hours and now looks something like this.

Figure 5 - Our Map

Now, a key part of this map is that it identifies what should be outsourced, consumed as utility services or other departments' (platform) components, and built in a six-sigma-like fashion with a focus on volume operations VERSUS that which needs to be built in-house, using agile techniques, in a more dynamic environment VERSUS that which should be more off-the-shelf, lean etc.

Obviously, we're aware everything is evolving between the two extremes due to supply and demand competition, hence the necessity for a three-party system like Pioneers, Settlers and Town Planners (don't get me started on the bimodal gibberish). We're all good.

BUT ... in the same way we need different methods for project management in any large scale project ideally scheduled together with Kanban, then we also need different purchasing tactics (see figure 6).

Figure 6 - Project management and purchasing methods

In other words, just as one-size-fits-all project methodologies are ineffective in any large system, the same is true of purchasing. Any large-scale system requires a mix of time and materials, outcome-based, COTS & fixed contract and unit / utility charging. Each has different strengths and merits, as with project management methods. All activities evolve and how you purchase them will change accordingly.

So a couple of points ...

1) Long term contracts for any component are a big effing "no, no!" unless you're talking about a commodity / utility. The activity will evolve and so will the method of purchasing.

2) Treating massive systems as a whole and applying a single-size method - e.g. six sigma everywhere, outsource everything, fixed price contracts everywhere - is a big effing "no, NO!" with cream on top. This is almost guaranteed to cause massive cost overruns and failure.

Unfortunately, prior to the coalition, we had a long spell in UK Gov IT where single-size methods such as "outsource everything" and fixed price contracts became the dogma, based upon the single idea that if you can get the specification right then it'll all work.

This idea is unbelievably ludicrous. Take a look at a map of any complex system such as the IT in High Speed Rail (see figure 7).

Figure 7 - Early Map HS2 IT

Whilst some elements of that map are industrialised (suitable for outsourcing to utility suppliers with known specifications), large chunks of that map contain elements that are uncharted, i.e. novel, uncertain and constantly changing. We DON'T know what we need; the specification will inevitably change; these components are about exploration, and no amount of navel gazing will produce the perfect specification. If you plaster a single-size method across the lot then you'll inevitably incur massive change control costs and overruns, precisely because a large chunk of the map is uncertain - it will change!

However, alas we had this single size idea and not only did the single size approach lead to massive cost overruns and a catalogue of IT disasters, it also had the nefarious effect of reducing internal engineering capability. This had serious consequences for the ability of Government to challenge (something we covered in the 'Better for Less' paper and the need for an "intelligent customer" and a Leverage & Commoditise group to ensure challenge). 

In UK Gov, they now have that more "intelligent customer" in the guise of GDS and the Digital Leaders Network. We also have that challenge function in GDS spend control. 

I kid you not, but prior to this I saw an example which was as close enough as it makes no difference to ...

1) Department wants to do something, asked their favourite vendor to do the options analysis 
2) The options came back as Option A) Rubbish, Option B) Rubbish, Option C) Hire us - Brilliant. 
3) The department then asked how much for option C) and haggled a small amount on a figure that was in excess of £100 Million. 

This was for a project that I couldn't work out how you could spend more than £5-£10 Million on. This was not "challenge". This was handing over large fists of cash. That has now fortunately changed. However, you have to be extremely careful to avoid one-size-fits-all methods, otherwise it's easy to fall into the same trap.

So, this brings me to purchasing. Any group involved in this has to know how to use multiple purchasing methods to get the best result. It also exists to serve a need, namely "to help the Engineering capability deliver and develop efficiently and effectively".

This is key. 

Purchasing has to be subservient to Engineering, it has to support it, advise on the best use of contracts and enable it to do the job. If it isn't subservient then you run the future danger of a one size fits all policy being applied and the same mess highlighted above being created.

Now, those various types of work (whether T&M, outcome based, COTS, fixed price or utility) can often be labelled "services" and there is no reason why you can't have a competitive market of these. In the UK Gov case, G-Cloud seems a suitable framework.

So, why do I mention all of this?

First, there was this discussion I had with @harrym on the Digital Services Framework and UK Gov Purchasing.

Then, there was this rather disturbing post by Chris Chant on everything unacceptable with UK Purchasing 

Now, this disturbs me. The real concern is the accusation that CCS is overstepping its mark, not acting as the support function that it is, and imposing policy. If that is the case, I can see why Chris argues that CCS should be scrapped or at least (in my view) absorbed into GDS.

Oh, why the commercial world bit in the title?

Let's be honest, I lied at the beginning. When I said this is the "most super simple basic information that every CEO should have at their fingertips" then we all know that the vast majority of large companies in the commercial world have no idea what transactions they do and what volume of transactions they have beyond some basic revenue lines.

Without this, it's fairly farcical to assume large corporations have any concept of user needs, mapping, situational awareness or strategic play beyond simply copying others. This sort of discussion is light years ahead of where most major corporates are (there are a few exceptions like Amazon). The problems that UK Gov faces are problems that most corporates could only dream of. 

I only put that line in because some troll often pipes up and says "UK Gov should be more like the commercial market". Yeah, for the vast majority that is clueless and hopeless, waiting to be disrupted whilst pretending they're master chess players without ever looking at a board. No thanks, this is our Government we're talking about.

Brontosaurus BDSM, Werewolf Marines, and Serious Social Issues: Self-Publishing in the Wild by Charlie Stross

Hello. I'm Rachel Manija Brown, co-author (with Sherwood Smith) of the YALSA Best Book for Young Adults, Stranger, and its sequel, Hostage. Stranger was published by Viking. Hostage was self-published. More on that in a moment.

Hello again. I'm also Lia Silver, author of the urban fantasy/paranormal romance series, Werewolf Marines, which is about werewolf Marines. Also PTSD and breaking the rules of at least two genres. (In my "Rachel no-middle-name Brown" identity, who doesn't write anything but treatment plans, I'm a PTSD therapist.)

And hello yet again. I'm also Rebecca Tregaron, author of the lesbian romance/urban fantasy/Gothic/romantic comedy/culinary mystery/everything and the kitchen sink Angel in the Attic, and the lesbian erotica, "Bound in Silk and Steel," in Her Private Passion: More Tales of Pleasure and Domination. (That's an anthology of lesbian erotica with 100% of its profits donated to the International Gay and Lesbian Human Rights Commission. Please consider purchasing it or its companion gay anthology, His Prize Possession if your interests include human rights, lesbian spanking, or gay tentacles.)

Lia and Rebecca are self-published. Rachel is traditionally published and self-published. Since you probably already know plenty about traditional publishing, I'm here to talk about self-publishing.

If you click on the cut, you will eventually get to a discussion about the indie erotica subgenre about sex with dinosaurs, minotaurs, and Bigfoot.

All discussion below is of the American publishing industry, because that's what I know about. I would be interested to hear about publishing in other countries.

The advent of the e-reader brought about a dramatic change in self-publishing. As Sherwood discussed in her blog post, self-publishing has a very long history. But with the rise of publishers as we now know them, self-publishing became both denigrated and difficult. It's the rare person who can make a living, or even break even, selling their own books out of the trunk of their car.

And then there were e-books. In particular, there was Amazon. Over the course of a few years, self-publishing became technically simple and potentially rewarding. "Should I self-publish or traditionally publish?" is now a serious question for many new writers, who formerly would not have considered self-publishing until and unless they'd been rejected from every traditional publisher in existence.

There are plenty of reasons to traditionally publish, such as advance money, your book placed in bookstores, prestige, library sales, lack of interest/skill at the business aspects of being a writer, potentially better publicity, the publisher providing editing and cover art for free, distrust of Amazon, and so forth. These are all clear benefits, so I'll leave it at that. Instead, I'll focus on the reasons writers choose to self-publish.

One of the most obvious reasons to self-publish is that, for whatever reason, your book is unconventional, cross-genre, hard to categorize, hard to market, rule-breaking, or simply in an unfashionable genre. My own Angel in the Attic is both cross-genre and peculiar; the Werewolf Marines books have very unconventional elements for their genres. Courtney Milan, formerly a Harlequin romance novelist, turned to self-publishing in part because the books she wanted to write were too quirky and her protagonists didn't fit the popular mold. As her FAQ says, "The notes on everything would have come back as NEEDS MOAR JERK." Space opera with intricate worldbuilding seems to have largely fallen out of fashion, which may be why Ankaret Wells self-published her charming Maker's Mask books.

But there are more serious reasons. Belonging to a minority or marginalized group means that you face obstacles that members of the majority don't, and traditional publishing is no different from any other field in that regard. Obviously, some books by diverse writers and about diverse characters do get published traditionally. But the barriers are real.

Before we sold Stranger to Viking, an agent offered us representation on the condition that we make a gay character straight or remove his romance and all references to his sexual orientation. If you read the comments of the article, you'll see many more writers discussing their experiences with agents and editors requesting that they make characters white or straight. Our experience was not a one-of-a-kind fluke, but a single example of a real problem.

I'm sure you can think of YA novels that have LGBTQ characters. But it's like naming female heads of state: everyone can name a few, but they tend to be the same ones. That's because there aren't very many of them, so the few that exist are memorable. As of 2011, fewer than 1% of all YA novels had any LGBTQ characters at all, even minor supporting characters. (By 2014, the percentage had risen to 2%, but most were from LGBTQ small presses.) And this doesn't just involve sexual orientation: characters in children's books are almost always white.

So where does this leave the writers who belong to those underrepresented groups or just want to write about them? A few do succeed in traditional publishing. Many more write for small presses. But increasingly, they're turning to self-publishing, where they can connect with readers who either want to read about people like them, or are simply tired of only reading about straight white able-bodied British or American characters.

Dr. Zetta Elliott writes here about independently publishing African-American children's books. Neesha Meminger wanted to give South Asian teens something fun to read. Courtney Milan writes witty romances with protagonists who have mental illnesses, are people of color, or (in an upcoming novel) are transgender. And then there are the booming genres of gay and lesbian romance. Not to mention the indie writers of contemporary and paranormal (heterosexual) romance, thrillers, science fiction, and romantic comedy whose books are similar to those published traditionally except that the protagonists are not white.

I self-published my Werewolf Marines books partly because of those issues. The heroine of Laura's Wolf is Jewish. I can count the number of Jewish heroines I've encountered in traditionally published romance or urban fantasy on the fingers of one hand. The hero of Prisoner and Partner is Filipino. Asian-American heroes in those genres are about as common as Jewish heroines. That's not to say that it would be impossible to sell a book in those genres with the protagonists I had. But I didn't want to risk banging my head against the wall for years and years, and then find that it was impossible.

Which brings us to the next reason writers self-publish. Time. Traditional publishing has always operated on a long timeline. But it seems to have gotten longer and longer for all but the most successful authors. Indie sf/fantasy author Andrea Host took that route after a publisher took ten years to consider her manuscript.

Sherwood and I chose to self-publish Hostage largely because of the issue of time. Stranger was published in November, three years after it was purchased by Viking. Though we turned in Hostage a year before Stranger came out, it wouldn't have been published until a minimum of two years after Stranger. Similar gaps between subsequent books would have been likely, no matter how fast the books were written. We felt that such long delays between books in a series are deadly for sales, and often lead to them getting canceled midstream.

Traditionally published writers who are prolific may put out books under pen names, or simply have manuscripts pile up. Prolific indie authors, especially those who write in popular genres like romance, can publish books as fast as they can write and edit them, and some parlay that into devoted audiences and large sales. Sherwood, who is a prolific writer, has self-published a number of books.

It turns out that readers have no qualms about reading more than one book by the same author in a year - far from it! They sign up for their favorite authors' mailing lists, get emailed when a new book is released, and automatically buy every one. Since indie authors tend to price books lower, at an average of $4.99, this is affordable for the readers and profitable for the writers. (On Amazon, the royalty from a $4.99 ebook is $3.50).
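
The arithmetic behind that royalty figure is simple enough to sketch. This is a rough model assuming Amazon's 70% royalty tier and ignoring the per-megabyte delivery fee Amazon also deducts; `kdp_royalty` is a hypothetical helper for illustration, not a real API:

```python
# Rough sketch of e-book royalty math under an assumed 70% royalty tier.
# Real KDP payouts also subtract a small delivery fee, which is omitted here.

def kdp_royalty(list_price: float, rate: float = 0.70) -> float:
    """Royalty per sale at a given list price and royalty rate."""
    return round(list_price * rate, 2)

print(kdp_royalty(4.99))  # ~3.49 per copy, close to the $3.50 cited above
print(kdp_royalty(2.99))  # ~2.09 per copy for a short story priced at $2.99
```

The point of the low price is volume: a reader who buys five $4.99 books in a year still pays less than three hardcover-priced e-books, while the author keeps roughly 70 cents on the dollar.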

The ability to set the price is a huge benefit to indie authors, and was another factor in the decision to self-publish Hostage. Few readers will buy an e-book priced over $7.99 by an author who's not already one of their very favorites. Not having your debut novel priced at $10.99 as an e-book is very useful for indie authors.

They can also do things like set the first book in the series free, as I recently did for my own Prisoner (Werewolf Marines). I hope to lure new readers to the sequel, Partner, which will be released next week. "The first hit is free" is still an excellent marketing tactic. It has been used with great success by adventure sf author Lindsay Buroker and rule-breaking romance novelist Courtney Milan, among others.

But, of course, what you really want to read about is dinosaur sex. Such as Ravished by the Triceratops (Dinosaur Erotica). As Sherwood noted in her post on the history of publishing, sex sells and always has. Any kind of sex. Acrobatic sex. (Not worksafe). Gay sex. Tentacle sex. (Not worksafe). Incest. Oil Change 2: Racing Hearts (Mechaphilia Transformation Erotica).

A number of writers are doing quite well selling short erotic stories for between 99 cents and $2.99. The latter may seem outrageous if you think of it as the price of a short story. It's less so if you think of it as the price of an orgasm.

Writers have always been able to sell erotica. Indie publishing just makes it easier. And, as has always been the case, the writers and readers are in a constant battle with pearl-clutching moralists who really hate the idea of women enjoying sex. Even fictional sex. Oh, yes. As with romance (which nowadays often contains extremely explicit sex scenes) the majority of the writers and readers of indie erotica appear to be women. (Based on indie writers' forums, asking around, and personal knowledge - of the many erotica writers I know, about 70-80% are women.)

Periodically, Amazon and other vendors ban kinks deemed too shocking, so erotica writers need to stay on their toes. One month lactation kink may be banned, and the next month it's rape fantasy. Incest is banned, but pseudo-incest (step-relatives) is allowed. And so forth.

There are several ways that readers and writers can evade the misogynists and moralists to come to a mutually beneficial agreement. One is to set up their own company. Bestselling erotica author Selena Kitt created Excessica for exactly that reason. (Even outside of the erotica genre, some writers simply don't want to deal with Amazon or other middlemen. Writers' collectives like Book View Cafe enable writers to keep 95% of their profits in exchange for working to support the collective.)

But most indie writers do make most of their money on Amazon. To evade the censors, they use an array of code words, which their readers then learn. (All these code words are for use in the titles and blurbs; you can use the real words in the books themselves.) "Taboo" means pseudo-incest, since both that word and all family terms are banned in erotica. "Feeding" means lactation, since the word "milk" is banned in erotica. "Possession" or "captive" substitutes for the banned "slave." And so forth.

Marketing on Amazon is done largely by inputting keywords when uploading your book. Keywords and phrases are search terms readers use. For instance, "gay young adult novel" or "strong female characters" or "zombie steampunk." In erotica, you can use the real terms in keywords even if they're banned from blurbs. So if you go to Amazon and type in the banned word "orgy," you'll get books that used that as a keyword but have discreet titles like The Arrangement. (Or less discreet titles that at least don't include "orgy.")

Amazon is aware of this, of course. It seems that they're less interested in outright banning all erotica than in banning certain types and in keeping a virtual brown paper wrapper over graphic language visible in the storefront.

As for the dinosaur, minotaur, and Bigfoot sex, it's part of a subgenre called "monster sex," which is erotica about mythical beings. Bestiality is banned, so you can't write about sex with a bull. (Unless you're Ovid.) But you can write about sex with a minotaur.

Think erotica readers are freaks? Fantasies about strange sex are nothing new. Think of Pasiphae and her bull, or Zeus transforming into a swan or a shower of gold. Think of what you fantasize about when you're alone in bed. Yes, those fantasies. The weird, wild, strangely specific ones. Erotica isn't about realistic depictions of sex, it's about sexual fantasy. People don't pay $2.99 to fantasize about having safe, sane, consensual, protected sex with an ordinary person, in an ordinary way, at an average level of mutual satisfaction. That's what real life is for.

Then again, many of the authors are clearly just having fun with the whole thing, as are their readers. (Not worksafe, but hilarious. My favorite is the gay billionaire living jet plane.)

But the fun is also serious business. Lurk on indie erotica and romance forums, and you'll hear lots of stories about women and men, some of whom had never written before, writing themselves out of poverty and unemployment. Many indie authors fail. But many succeed. Erotica and romance are genres particularly known for enabling writers to support themselves and their families when they were about to despair. And they're doing it by providing their readers with the joy of sexual fantasy, at only $2.99 a pop.

That's what I call a happy ending.

Note: When commenting, please don't judge people for what they enjoy reading, even if you personally find it offensive or unappealing. What people like in fiction often has nothing to do with their real life opinions and preferences. Enjoying murder mysteries doesn't mean that you want to murder people, and enjoying fictional rape fantasies doesn't mean that you want to rape or be raped, or think that rape is okay. Etcetera.

My Favourite Film of 2009. by Feeling Listless

Film One of the key locations in Glorious 39 is Walsingham Abbey in Norfolk. Here's a contemporary local news clip which talks about the making of the film. The Glorious 39 piece begins two minutes in (after a piece about another country house which is going through just the sort of stresses that might themselves have appeared in a Stephen Poliakoff tv series):

As the expert in the clip describes, one of the key shots in the film is in the opening sequence, as DP Danny Cohen's steadicam swoops through the medieval abbey, its giant, incongruous, breathtaking arches standing impenetrably against the newer house. Like that house, Glorious is a newer addition to the landscape of a family which has existed for many years but which, like the abbey, is apparently standing firm while crumbling.

Machine Intelligence Progress is Accelerating by Albert Wenger

I don’t (yet) worry about a Terminator or Matrix like scenario where machines take over the world and subjugate humans. But it is useful to remind oneself of the rate of progress that has been taking place. I started programming computers in my early teens, which is now almost 35 years ago. For the first 25 of those years, progress towards any kind of artificial intelligence was painfully slow. For instance, as late as 2004, when DARPA ran the first Grand Challenge, it looked like we were a long way from a self-driving car — the best vehicle made it barely 8 miles into a closed course in the desert. But then development took off, and by 2012, only 8 years later, Google driverless cars had covered several hundred thousand miles on public roads.

Other areas have seen similar acceleration. For instance, image understanding tasks such as facial recognition used to be notoriously difficult. But in 2014 an algorithm for the first time outperformed humans on a large data set of faces. Further advances in image analysis are coming from the progress that has been made with deep learning. This refers to a set of technologies that allow for the formation of intermediate concepts based on observed data. That turns out to be closer to how the human brain works and is producing some spectacular results. For an interesting demo of the latest capabilities you can check out the recognition capabilities at Clarifai, a New York based startup. Again the rate of change here in the last few years completely outpaces anything we have seen previously.

And here is yet one more example of progress. Realistic walking was considered to be one of the things robots would have a very hard time with. And we are still not there, but compare this video of an early Big Dog from Boston Dynamics (now owned by Google) with the latest generation, called Spot.

Now one might look at these in isolation and see only highly specialized somewhat awkward machines that can’t possibly have much of an impact on say the labor market. But that would be an error. Machines can and will do things quite differently and that is what will let them outperform. And no single machine needs to outperform humans at all tasks. Instead all it takes is for specialized machines to do so on the tasks they were designed for.

That is why I will keep writing here on Continuations about such things as a Basic Income Guarantee. Having followed the field of machine intelligence for many years, I am excited by what now lies within our reach and want us to push forward. We should just simultaneously work on all the public policies that will be needed.

Opening up MEL technology by Aptivate

Technology for monitoring, evaluation and learning (MEL): we need to address complexity, interoperability and affordability.

SN1997bs’s True Identity: Impostor or Not? by Astrobites

Title: LOSS’S First Supernova: New Limits on the “Impostor” SN 1997bs
Authors: S. M. Adams & C. S. Kochanek
First Author’s Institution: Ohio State University
Paper Status: Submitted to MNRAS
Bonus: Video of the first author explaining this paper!

As a scientific field driven by observation, astronomy has a knack for grouping objects together that kinda-sorta look alike, regardless of their underlying physics. Take, for example, our supernova classification scheme. While all supernovae mark the explosive death of a star, they have a wide range of properties. Historically, supernovae are broadly grouped into two categories based on their spectra: Type I’s (which lack signs of hydrogen in their spectra) and Type II’s (which have hydrogen lines). As it turns out, these categories are largely unrelated to the actual physical process driving these explosions. Confusing, right? In today’s paper, the supernova community faces another case of bad taxonomy: objects collectively known as “supernova impostors.”

Supernova Impostors: Theory vs Practice

In theory, supernova impostors are extremely bright explosions of stars which look a lot like supernovae. Unlike supernovae, they leave behind most of the progenitor star surrounded by a shell of newly formed dust. The most famous example of such an object is Eta Carinae’s larger star, which has undergone multiple explosions throughout the 1700s and 1800s. While the exact mechanisms of these blasts are uncertain, they appear commonplace in massive Luminous Blue Variable stars (LBVs) like Eta Carinae. Figure 1 shows Eta Carinae as seen by Hubble, surrounded by its homunculus nebula.

Eta Carinae in optical

Fig. 1: Eta Carinae as seen by Hubble. Eta Carinae is a supernova impostor that has undergone several eruption events.

In practice, objects are often labeled “supernova impostors” without knowledge of their true origin. Supernova impostors, at first glance, look like dim type IIn supernovae. Type IIn supernovae are core-collapse supernovae that have narrow hydrogen emission lines in their spectra. The narrowness of these features is odd because supernovae are fast, energetic events that typically produce broad features; the narrowness implies that the supernova is interacting with dust that has veiled the progenitor. So if an astronomer sees a transient that doesn’t look energetic enough to be a true IIn supernova, they might label it a “supernova impostor”. But not all so-called impostors seem to be explosions from LBVs like Eta Carinae. In fact, today’s paper argues that one of the archetypes of supernova impostors, SN1997bs, may not be an impostor at all!

SN1997bs: An impostor or the real deal?

As the name suggests, SN1997bs was first discovered in April of 1997 and was classified as a type IIn supernova. The supposed supernova then faded to be even dimmer than its pre-explosion progenitor star, and appeared to hold steady in the early 2000s. Originally, the flattening of the light curve was interpreted as the steady flux of the explosion’s survivor – making SN1997bs a supernova impostor.

Today’s paper relies on recent photometry to answer the basic question: is the star really still there? The authors monitored SN1997bs with Hubble, Spitzer and the Large Binocular Telescope (LBT) until 2014, nearly two decades after the initial transient! They find that the object has continued to dim far below the progenitor’s initial brightness, as seen in Figure 2. In fact, there is no highly significant detection of the star in any of the images dated between 2013 and 2014. If the star is no longer detectable, what does that mean about SN1997bs? The authors discuss a few theories:

Light curve of SN1997bs

Fig. 2: Optical light curve of SN 1997bs. SN1997bs appears to dim past its progenitor’s original brightness (shown in dotted lines). Here the V (squares) and I (triangles) optical bands are shown, including a V-band upper limit from LBT. All of the points past 1000 days are upper limits on SN1997bs’s brightness.

  • Could SN1997bs be a “canonical” supernova impostor? Perhaps SN1997bs is an Eta Carinae-like star and now finds itself in a dirty web of dust. The authors say that this is unlikely because the star would be expected to eventually brighten in redder light (or to “redden”). Instead, the star appears to get bluer and then disappear altogether.
  • Forget the dust…what if SN1997bs is a “tuckered-out” impostor? The star may have erupted and then declined into a much darker state 30 times fainter than the original star! Currently, there is no known mechanism that can dramatically reduce the star’s intrinsic luminosity after a single event. If anything, we would expect a star’s outer envelope to expand during the eruption and lead to an even brighter star. Either some new astrophysics is happening or…
  • SN1997bs may be a true type IIn supernova? The authors argue that the simplest solution to a disappearing star is to assume that it is truly gone. Although SN1997bs would be an extremely subluminous (or low energy) supernova, the authors argue that other faint core collapse supernovae are just now being discovered, and the energy ranges for these events may be broader than we originally thought. Additional factors, such as obscuring dust, may contribute to underestimating the energy of this initial supernova.

In order to be sure that SN1997bs was an authentic supernova, we need better observations to confirm that the star has vanished. Although a compact remnant, like a neutron star, might exist after this explosion, it would be far dimmer than the original star. The uncertainty in SN1997bs’s real identity highlights a fundamental flaw in how we label and study impostors: What if SN1997bs, an archetype of supernova impostors, isn’t an impostor after all? As our knowledge in this field grows, we need to be careful about classifying these eclectic “impostors.”

Rosetta shifts from sedate circular orbits to swooping flybys by The Planetary Society

For the period of time before and after the Philae landing, Rosetta was able to orbit the comet close enough that it was in gravitationally bound orbits, circling the comet's center of gravity. As the comet's activity increases, the spacecraft has to spend most of its time farther away, performing occasional close flybys. The first of these is at 6 kilometers, on February 14.

February 10, 2015

We need to talk about Peter Parker. Probably. by Feeling Listless

Film Well that's quite a few days. Big Finish licenses from nuWho, Australia confirmed for Eurovision and now Spider-Man to appear in the Marvel Cinematic Universe. In terms of personal wish lists, at this rate, by the end of the year Mutya Keisha Siobhan will have an album out and Marco Polo will be released on dvd. The Doctor Who one. Even this list doesn't look so stupid now:

The Beatles are added to Spotify.

Taylor Swift does Glastonbury.

Greens overtake the LibDems in Parliament.

The next series of Doctor Who is better than the last one.

Spider-man joins the Marvel Cinematic Universe.

That's my list of annual predictions from the beginning of this year. The way the wind is blowing, the least likely item's probably Doctor Who being any good this year. Former festivalphobe Swift's playing the Big Weekend. That has to be a prelude to something.

Either way yes, hello Spidey and welcome to the same filmic reality as Groot, Trevor Slattery and Fitz/Simmons.

There's loads of speculation across the echo chamber of the many news websites with advertisers and an SEO commitment but we don't really know a lot yet.  We know that he'll appear in an already announced film first before starring in his own property in mid-2017 pushing back all the other films already announced that aren't The Avengers or Guardians of the Galaxy 2.  I'm going to be in my mid-forties before this release schedule ends now.  Let's not think about that.

But other than that.  Shrugs.  Not a sausage.   Here's what I think:

For all my bitching and moaning about how perfect he is, Andrew Garfield will not be retained.  My taste would be for the earlier Sony films to be co-opted into the MCU for all their creative problems, but it looks like that story won't be completed.  Felicity Jones will not be Black Cat now.  Unless she is.  But imagine how he feels right now.  This guy.  The guy who turned up at Comic-Con and did this:

Not that you couldn't retain Andrew Garfield.  You could approach the next one as a semi-continuation, taking the imaginative leap that those events happened just in a slightly different way, à la The Incredible Hulk and somewhat Superman Returns.  Or if they really wanted to spend the money, producing special new versions of the Sony movies with more MCU content shoehorned in.  Or just keep him, whatever.

Ah fuck it.  Andrew Garfield and everything else is gone.  They're burning it all down.  Shame.

But who for Parker now?  An unknown presumably though that's not always the approach in the MCU.  Eddie Redmayne's not a stupid idea, I suppose, but they may want to cast an American.  Zac Efron?  (I'm really bad at this).  Whoever it is, clearly the brilliant thing to do would be not to announce his casting then have that moment in Civil War when Spider-Man removes his mask and that's the moment when we all find out who's playing him.

Is this just Spider-Man or is it the whole thing?  Can aspects of his mythology now reintegrate with the MCU?  Will we have the Daily Bugle in the Netflix series set in New York?  Let's hope so.  Would make sense.

Are we going to get another version of the origin story, for the third time in just a decade and a half?  Probably not.  Unless we do.  My guess is that even if it's not a backdoor TAS3 somewhat like The Incredible Hulk and somewhat Superman Returns, there'll be an expectation that the audience gets it already and the salient points will appear in the credit sequence or flashbacks.  Forbes suggests Batman Forever as the template.  Let's not get carried away.

For all that, will it even be Peter Parker or will it be Miles Morales?  My guess is it'll still be Peter.  He's still the brand name most widely associated with the character.  Clearly the MCU approach and the more interesting, fresh choice would be Miles but it seems very unlikely.  But we also live in the world where a Guardians of the Galaxy film had almost the greatest box office take of all time so, well, shrugs.

For now, let's just all bask in the glory of this old Late Review transcript in which Germaine Greer, Charlie Higson and Michael Gove review the first of the Raimi films.  Take it away Govie:

It's for teenagers, because it has a genuinely, I believe, more sophisticated moral structure than most films aimed at that age group, and without it being explicitly Christian, I do think it has that theme running through it.

When the Green Goblin takes Spider-Man up and shows him New York City and says, "My boy, this could be yours if you join me in this wicked project" is reminiscent of the Bible.

And the final scene is the renunciation scene. It's an affirmation of celibacy and vocation.
It's worth reading for Higson's rebuttal. Well played, Higson.

NYC Should Tax High End Absentee Apartment Buyers by Albert Wenger

The New York Times has been running a great series about high end real estate ownership in the city titled “Towers of Secrecy”. The upshot is that much of it is owned by people who are rarely if ever here. I believe there should be a special tax on apartments that owners spend very little time in, and that tax should be heavily progressive, meaning it gets a lot more expensive the higher the purchase price.

The tax should be high enough to actually deter some of these purchases. Why? Because it is not healthy for the city to have a lot of empty high end real estate. First, the prices paid at the highest end, while somewhat disconnected, still pull up the rest of the market (if only through signaling effects), making New York City less affordable. Second, entire high rise buildings with few lights on in them at night have a kind of ghost town effect.

We have seen this firsthand for years in West Chelsea. The tower at 200 11th Avenue has a lot of out of town celebrity apartment owners. Hardly any of them are ever there, with maybe a couple of lights on at night. It feels dark and vaguely post-apocalyptic. Parts of London, such as Belgravia, show what this can look like in the extreme.

For whatever purchases it does not deter, the tax would produce income for the city of New York, which wouldn’t be a bad thing either. For instance, we could invest it into better and more affordable broadband for all.

PS Jonathan Glick rightly points out that if the deterrence is too successful there will be less taxes in aggregate. I replied that I expect there is a Laffer curve here. But in any case this is an example of taxing an activity that has negative externalities which is exactly how taxes should be levied. 

Just How Dark is Dark Matter? by Astrobites

Title: Glow in the Dark Matter: Observing Galactic Halos with Scattered Light
Authors: J. H. Davis and J. Silk
First Author’s Institution: Institut d’Astrophysique de Paris

Dark matter has only been detected via its gravitational effects. These effects include gravitational lensing and galaxy rotation curves, both suggesting much more mass in galaxies and galaxy clusters than we can see. The most popular explanation for these effects is that about 80% of the mass in the universe is in the form of weakly interacting massive particles (WIMPs), which do not interact via the electromagnetic force, and thus do not interact with photons (hence, dark). To test this, Davis and Silk propose a new method to quantify the dark matter-photon interaction by looking for extra scattered light in haloes of galaxies.


Figure 1: If dark matter and light interact, photons from the bright central regions of the galaxy (blue dashed line) can scatter off of dark matter (light blue dot), deflecting towards the observer. This scattered light adds to the light coming from the outer regions of the disk (orange line), causing an apparent brightening of the outer disk.

If dark matter does interact with light, a good place to look for this signal is in the outskirts of spiral galaxies, where rotation curves tell us that dark matter dominates the mass. Here, a dark matter particle might deflect, or scatter, a photon from the bright central regions of the galaxy into our line of sight. This idea is shown in Figure 1. This effect will cause the outer regions of the galaxy to appear slightly brighter than they really are. Measuring this brightening gives an estimate of the probability of interaction, called the cross-section, between dark matter particles and photons.


Figure 2: Surface brightness profile of M101. Black points are the data from the Dragonfly Telescope Array. The dashed lines are models for the stellar halo and spiral disk. The red line is the contribution due to modeled dark matter scattering, corresponding to the upper limit for the dark matter-photon cross section. The green line shows the combination of all modeled components.

The authors apply this idea to the case of spiral galaxy M101. Using measurements from the Dragonfly Telescope Array, the authors fit the optical M101 luminosity profile with an exponential model for the disk and a stellar halo contribution, assuming the dust scattering contribution is negligible. Because the stellar halo also extends to large radii, it is difficult to distinguish between light from halo stars and scattered disk light. Thus, the authors find the maximum contribution to the luminosity profile from scattered light that is consistent with the measurements, shown in Figure 2. This gives an upper limit on the dark matter-photon scattering cross-section of σ(DM−γ) < 10⁻²³ (m/GeV) cm². The best indirect upper limit for this cross section is almost 10 orders of magnitude smaller than this! To argue that this method is worth exploring, the authors introduce the idea that cross section may increase with the wavelength of light scattered.


Figure 3: Spectra of the different sources of light in the outskirts of galaxies. If the dark matter-photon cross section increases towards longer wavelength, light scattered by dark matter (red line) can be more easily distinguished from halo stars (blue line).

In some models of dark matter, the scattering cross-section increases with the wavelength of light scattered. In this case, the light scattered by dark matter will have a different spectrum than the light from halo stars, or light scattered by dust. At longer wavelengths, the light scattered by dark matter will become more important relative to the other contributions in the outskirts of galaxies. This point is illustrated in Figure 3. If the wavelength dependence goes as σ(DM−γ) ~ λ⁴, then similar measurements of the outskirts of galaxies in the infrared could provide the best limits yet on the scattering cross-section. Also, measurements at multiple wavelengths can constrain the wavelength dependence of the scattering cross-section, potentially ruling out certain flavors of dark matter.
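
A back-of-the-envelope scaling shows why longer wavelengths help so much. The specific wavelengths below are illustrative assumptions (roughly optical versus mid-infrared), not values taken from the paper:

```latex
\sigma(\lambda) = \sigma_0 \left(\frac{\lambda}{\lambda_0}\right)^{4}
\quad\Longrightarrow\quad
\frac{\sigma(5\,\mu\mathrm{m})}{\sigma(0.5\,\mu\mathrm{m})}
= \left(\frac{5}{0.5}\right)^{4} = 10^{4}
```

So if the λ⁴ scaling holds, infrared observations of comparable depth would see a scattered-light signal four orders of magnitude stronger, and a non-detection would tighten the cross-section limit by roughly the same factor.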

This paper is a proof-of-concept that deep measurements of the outskirts of spiral galaxies can place a limit on the interaction of dark matter with light. While the authors come up short of the best upper limit, this technique can be extended to longer wavelengths and larger samples of galaxies, which should provide a better discriminant between light scattering from dark matter and other sources of light. This can be used to distinguish between different models of dark matter particles. Perhaps, with similar methods, we may one day detect a glint of light from the darkest stuff in the universe.


A new mission for Akatsuki, and status updates for Hayabusa 2 and Chang'e by The Planetary Society

Brief updates on four ongoing missions: JAXA's Akatsuki and Hayabusa 2, and China's Chang'e 3 and Chang'e 5 test vehicle. JAXA has articulated the new science plan for Akatsuki. Hayabusa 2's ion engines have checked out successfully. The Yutu rover is still alive on the Moon, and the Chang'e 5 test vehicle has successfully tested crucial rendezvous operations in lunar orbit.

Two Days, Two Launches and Three Landings by The Planetary Society

Within a two-day span, two rocket launches and three ocean landings are scheduled—one of which involves an autonomous spaceport drone ship.

Planet Formation and the Origin of Life by The Planetary Society

To understand the possible distribution of life in the Universe it is important to study planet formation and evolution. These processes are recorded in the chemistry and mineralogy of asteroids and comets, and in the geology of ancient planetary surfaces in our Solar System.

February 09, 2015

Big Unit Finishing News. by Feeling Listless

Audio Of course the biggest news of the day was Big Finish's announcement that they'd licensed Kate Stewart to appear in a series of UNIT boxed sets, with Jemma Redgrave signed to reprise the character. It's the first time they've specifically been able to make something using material from the new series that wasn't a BBC Audio(Go) commission.

Look at that cover. Look at it.

Some notes:

(1) Pertwee logo.

(2) NuWho Auton design.

(3) Doesn't that look like Captain Jack's jacket?

(4) How will this sound? Like the recreations of 70s Fourth Doctor stories, will these attempt to do the sound of the new series, music and all? How about the tone of the scripts?

(5) Will they refer back to earlier UNIT series? Kate's Dad was a huge figure in the series which ran in the early noughties. You can listen to the prequel for free on Sound Cloud here. When the synopsis talks about her UNIT team, will this be Colonel Emily Chaudhry et al or will they be really daring and bring Yates back? In the Big Finish continuity he's back in active service.  When will it be set?  Can Osgood return?

(6) Could this be a prelude to something else? Eccleston stories are clearly out of the question but could David Tennant return? Matt Smith? Cor.

The BAFTAs 2015. Again. by Feeling Listless

Film For all my bitching and moaning and thinking and praying, with the end of the BAFTAs in actuality and the start of the edited highlights only overlapping by about twenty minutes, I was only spoilt via social media for three categories, Best Actor, Best Supporting Actor and Best Picture, but I didn't care. Because it was Boyhood, because it was for once the film I wanted to win everything, it just meant I could have a big cheer early in the evening and another one later.

Criticise Stephen Fry all you want, but other than Norton I can't think of many other presenters who quite fit. He's generally loved by the people in the hall and while it's true we'd love to have someone as edgy as Tina Fey and Amy Poehler presenting doing that sort of material, Mail-related fear means the BBC wouldn't dare. Even when Ross presented it briefly, the results were hardly the Comedy Awards. No Garai moments this year either.

No Rock and Roll Fun has a perfect review of the Kasabian incident. These opening musical numbers are becoming increasingly problematic; Bafta lacks a Best Song category but for some reason they give us a track anyway and usually from someone with little or no connection to the film industry and in a ceremony which is already being edited down to fit the timeslot.  Surely the time would be best spent giving out an otherwise squeezed award like Best Animation?

There were fewer of those this year, I think.  Best Film Not In The English Language turned up in the main body for the first time in a while, so Mark Kermode wasn't called upon to incongruously hand it out.  I was pleased Ida won.  I've not seen it but I've otherwise been a fan of Pawel Pawlikowski especially since he was the reporter who was being cued in during the autocue balls-up on The Late Show which appeared on the A-Z of TV Hell ("Pawel Pawlikowski reporting... err...")

This was the Tweet of the night:

Having ignored the shorts, my guesses ended up with 8/16 thanks to a late surge in the squeezed categories at the end with all The Grand Budapest Hotel wins (which was my default choice in categories where Boyhood wasn't nominated or I hadn't seen any of the other films).  Which is better than usual.  About the only category I was cheesed about was Gugu Mbatha-Raw.  Jack O'Connell seems like a talented chap but a win by Gugu would have been more symbolic.

Nevertheless, Baftas garlanded Boyhood in all the right ways and that we now live in a world where Patricia Arquette is a BAFTA winner is all to the good.  Hers was an attenuated speech about how some careers seem to become golden in retrospect, fitting with the theme of the film which to an extent is all about the careers of the actors across those twelve years, ebbing and flowing in between that other single endeavour.

Then there was Ellar Coltrane's speech.  Of all the actors in the piece, he's arguably the one who was most vulnerable since he was the one put in that central position for all those years and on whom the whole project rested.  If he had decided not to continue the film would have ended, yet he carried on even when he wasn't happy and if there'd been justice he'd have been nominated in the best actor categories at the major rather than minor awards and won a few of them.

That's that for another year.  Will they finally be live next year?  Don't know.  One approach could be to make sure the show itself ends before the BBC show starts so that everything can effectively reset on the Twitters (as happened last night to a degree when some news feeds effectively pretended they were reporting on the wins during the tv broadcast even though they were technically repeating themselves).  We'll see.

My Country Tis of Thee by Charlie Stross

Normally, I do this kind of thinking-out-loud on my own blog, where about thirty people are paying attention. But then Charlie said "hey, I've got this guest spot, come make yourself vulnerable visible here!" And sure, why not?

Hi, my name's Laura Anne, and while in the past I've mostly been known for urban fantasy (of the modern-magic-and-mystery variety) and the fact that I convinced a publisher to pay me to write three books about wine-based magic (and got a Nebula nomination for it!), my next project decided that it was going to drag me screaming and kicking somewhere slightly more problematic: American history.

Now, the talk in genre these days is about diversity, calling for more characters of color and alternative cultures, and more writers of color and non-Western backgrounds.  And I'm 100% behind that  - not because I'm a guilty white liberal.  Because I'm needy.

There.  I admit it.

Yes, literature - genre or mainstream - is a mirror.  We look into it to see ourselves, through whatever reflects back. And that's why it's important for there to be diversity - so everyone gets a chance to see themselves.  But literature is also a window.  It's how we see things that aren't us, that bring new views, new light into who we are.

So I want to see more stories set in Asia, in Africa, in Latin America, in cultures that aren't mine, with characters who aren't me, in race, religion, color or sexuality, because they let me see something else, something I can't get any other way.  I need more of that, please!

But where does that call for diversity, and cultural authenticity, leave me as a writer?  I'm of mixed and muddled background - four different bloodlines each carrying several different countries on their backs and continents in their wake.  But for me to claim one of them as my mirror?  Would be false, because I'm not a member of those cultures: I'm American, three generations deep.   So how much of American culture can I claim? 

The modern side, absolutely - I've spent ten books, three novellas and a number of short stories writing about the American immigrant and integration culture through the Cosa Nostradamus novels. I think I've done a reasonably good job, there.

But what about where all that formed? If my next book were, say, set in a divergent history of pre-1800's North America - is that my culture? Is that my mirror, too? Or is it a window?

Was I appropriating something that didn't belong to me?

That's a thought to stop a writer dead in their tracks, if we're being honest. Both the fear of being called out for it, rightly or wrongly, and the inevitability of getting it wrong, because short of growing up immersed in something, we WILL get things wrong, and "but alternate history!" only buys you so much wiggle room.

But I had a story I needed to tell, things I needed to say, and this was how they were going to be told and said. So it was important for me to figure it out.

The book - the working title was THE DEVIL'S WEST - plays with the idea that, rather than being handed over in the Louisiana Purchase of 1803 that doubled the size of the United States with a pen stroke and a large check, that area was left in the hands of an entity known in the newspapers of the time as The Devil: an entire Territory within which the tribes remained unmolested, and any people of any nation who wanted to settle there needed to play by the devil's rules - or else.

I thought I could write my main character Isobel's point of view reasonably well - she's a first generation immigrant who thinks of this land as her own, as her home. But what of those she encounters, on both sides of the colonization argument?

The real history that created this setting is at heart the history of the hundreds of tribes and dozens of confederations that western incursion pushed to the side. In that sense, I am the outsider, the observer through the window. And for a non-Native American, writing about that portion of our history is deeply problematic on pretty much every level. I could not imagine what life had been like back then; the truest histories are locked away from me by nature of my skin and language, and I have access only to the things that were written down and shared - and too often when they were shared, elements were reserved, lost, or destroyed. I could only be true to what I could see through that window, and be aware that there was much beyond that window, out of my sight.

But the settlers who chose to risk, to go to a new world and find their fortune and their future, knowing that they left all security and certainty behind? I knew those people, though these were none of mine, coming predominantly from western and northern Europe. I knew what it felt like to hope for a welcome somewhere else, to plant the seeds of your future in that hope - and to arrive only to discover that the streets were not paved in gold, but rather hardship and distrust. There was my mirror, the familiar things I can study, and know.

And so that was how I approached my research, and my story: as a mirror reflecting into a window, casting a third image. And there were a lot of days I couldn't write, because what I'd researched required me to tear up things I'd thought I'd known, or stop and process something I hadn't known. And knowing that this was necessary, for the story and for my own ability to tell the story, ended up being small comfort.

Did I stay on the line of respect and accuracy, while playing with real history, and real cultures? I hope so - for my sake, and the sake of those who shared their knowledge with me.

Would I do it again? In a heartbeat. Because in a very real sense, that is the legacy of my nation, that is the culture I've been born into; problematic from the start, sometimes blending and sometimes clashing, the things that are good and the things that are bad, the things we are proud of and what we regret. My country, tis of thee I write.

As to how the whole thing turned out.... That, the future will have to tell (October 2015, to be exact).

February 08, 2015

Rising Stargirls: Girls of All Colors Learning, Exploring, and Discovering by The Planetary Society

Aomawa Shields discusses a workshop she designed for underrepresented girls in grades 6-8 that will teach key concepts in astronomy, highlighting what is beyond what we can see with our eyes, using nontraditional methods.

February 07, 2015

The Faces of Publishing by Charlie Stross

At the start of this year Rachel Manija Brown and I decided to self-publish Hostage, the second book in our YA dystopia series. The long explanation is here.

Some people applauded, others shook their heads, but most discussion has not been about our books so much as about publishing in general. Underlying that I think is the anxiety many of us writers feel about how fast publishing is changing, and what it all means for each of us.

Maybe it's just because I've always been a history geek, but the more I talk about this stuff, the more I'm reminded of the ways people dealt with the rapid changes of publishing during the wild days of the early novel, specifically in England. (Yeah, I know that Cervantes, and Madame de La Fayette, etc, were all early novelists, but I mean the eighteenth century when novel publishing went from a few to hundreds and beyond over a matter of decades. Kind of like genre books went from a few a year during the fifties and sixties, to hundreds a year, and then thousands.)

The way I see it, right before that, England's (after 1707 the UK's) publishing history divides off from the rest of Europe with two big changes: after 1695, the Licensing Act was allowed to lapse, and in 1709 the first copyright law passed. Before then, like the rest of Europe, political ups and downs were reflected in the struggle between printers and booksellers for ascendancy in restricting or promoting publications, while the government tried with varying success to control the whole.

That's not to say that after 1695 the English government gave up oversight entirely. They brought the hammer down against blasphemy, obscenity, and seditious libel--in 1719, the last man swung from the gallows for printing Jacobite material--but it was increasingly a rearguard action.

Like now, there were ripoff booksellers masquerading among the legitimate ones, though today's scammers (see Writer Beware) are rarely as colorful as the rascally Edmund Curll -- printer, pirate, and pornographer. He stole material with flagrant disregard for copyright. As soon as some prominent person died, he collected gossip -- it didn't matter if it was true -- for a biography, and if he didn't have enough material, he made it up. Prominent people reportedly dreaded dying because of what Curll would do to them. A faint echo of the Curll treatment occurred a couple weeks ago, when Colleen McCullough's obit started off by noting how fat and unlovely she'd been.

Curll churned out so much X-rated stuff under various guises that the word 'Curlicism' became synonymous with porn. Prison, a stint in the stocks, even being blanket-tossed and beaten by the boys at Westminster school not only didn't stop him from theft and libel, he turned them all into marketing opportunities. Even when he was convicted of libel and forced to publish an apology and a promise to stop printing, his repentant words touted his latest books.

He's best known for the twenty-year running duel with the poet Alexander Pope, from whom he not only stole, he lampooned under his own name and with sockpuppets. It began when he first pirated Pope, prompting the poet and his publisher to meet Curll at the Swan, where they slipped a mega dose of "physic" (think ExLax) into his drink. He turned that, too, into a marketing event, once he'd recovered from the extremes of ejecta; when Pope published a couple of triumphant pamphlets, claiming Curll was dead, Curll came right back with new material demonstrating that he was very much alive and up to his usual racket.

Their history--and there are other equally crazy-ass stories--reminds me of the whoops and hollers of internet feuds and FAILS now, among writers, editors, publishers (some individuals wearing all three hats).

Aside from the Curlls, most booksellers, the publishers of the eighteenth century--like the editors working at traditional publishers now--were hardworking people who made careful decisions about what to publish because they were the ones fronting the costs of printing and of copyright.

The booksellers of Grub Street were all about copyright. For most of the eighteenth century, they met yearly, over sumptuous dinners, to hold a copyright auction that was exclusive to the booksellers. Interlopers were unceremoniously chucked out.

Of course for every success there were misses, such as booksellers refusing to pay the asked-for five pounds for the copyright to the satiric poetry of the rakish clergyman Charles Churchill--who then self-published his Rosciad, clearing a thousand pounds in two months.

Most of the time the booksellers knew a good bargain when they saw it, such as John Cleland's lubricious Fanny Hill, whose copyright sold for 20 guineas. The Griffiths brothers, booksellers, cleaned it up a little, turned around, and raked in ten thousand pounds' profit -- before they all were arrested. Then, of course, it was pirated.

In 1740 Samuel Richardson's mega-hit, Pamela, started out as a work-for-hire piece jobbed out by booksellers Osborn and Rivington, who wanted a series of morally instructive letters. It quickly turned into a phenomenon reminiscent of reality TV, complete to heavy merchandising: Pamela fashions, dishes, etc.

In 1809 Jane Austen, using a pseudonym, wrote a crisp letter to Benjamin Crosby of Crosby and Sons demanding the return of the early version of Northanger Abbey when the bookseller lagged without printing the book for six years. His answer back was equally crisp: pay me back the ten pounds I paid for the copyright and it's yours. (She wasn't able to do that until 1816, not long before she died.)

The easiest way into print was through the periodicals, but self-publishing could be accomplished by anyone who managed to beg, borrow, or buy a printing press, like Horry Walpole, who was such a snob that he had to do everything himself to ensure his printed matter came up to his standards of good taste; he dismissed with lofty contempt the very idea of making feelthy lucre off his books, which he largely gave away. For those who couldn't buy a press and had no issue with feelthy lucre, there was always the subscription method, which today we call crowd-funding.

Early in the century Grub Street was an actual place (Milton Street in the Moorfields part of London), most booksellers having their shops in that part of London; by the end of the century publishers were everywhere. Adventuresome booksellers in Ireland energetically competed with those in England, such as when a team of covert ops printers scored the sheets to Sir Charles Grandison from Richardson's own press and smuggled them to Dublin, where the book came out before its author published his legit edition in London. Copyright theft was flagrant -- especially in the nascent States: Frederick Marryat talks in his memoir about the stinging irony of fans coming up to him in New York to rave about his books, for which he never received a penny. His daughter, a generation after, expressed the same regret in her memoir.

Piracy -- copyright -- how one got into print all seems to me to reflect patterns, then and now. But there's more. Until the eighteenth century, art (which included literature, which in turn was beginning to include literature's raffish bastard, novels) had been the preserve of kings and church. During this century it became the property of a larger public.

At the same time, a revolution in perceptions of privacy was also going on. The tension between the interior self and the public face, what is art and who gets to define it, is still going on three hundred years later. We call that outing and doxxing. There is also -- still -- a tension between originality or novelty and the sure sell; a struggle between what is literature and what is trash, who claims authority to establish standards, and how all these are marketed.

One heat-inducing question that has come up again in recent years is the question of patronage. Patrons have a tendency to want to get their money's worth, if they aren't tampering with the products they patronize, at the very least by only choosing works that appeal to their tastes.

In the 18th century, not surprisingly, wealthy noble patrons favored work that supported the aristocratic ideal, claiming that this was art as opposed to popular trash. If patrons didn't actually contribute cash to needy authors, they could influence the marketplace by using their rank, wealth, and position in promotion.

Jump up 300 years to genre.

It's tougher to sell mixed genres to traditional editors today, though it's happening, whereas it was nearly impossible twenty years ago. I don't believe the reason is fear of taking risks -- there are some terrific books coming out from all the major publishers that push boundaries all over the place -- so much as the question of marketing is for them still tied to the printed book, specifically to slots at the bookstore. Where do you put something you can't easily categorize?

But this has got pretty long. Rachel is going to talk more, specifically about self-publishing and why people do it, on the 11th.

Sherwood Smith

How I'm coping. by Feeling Listless

Life A couple of weeks ago I asked a question. The question was: now that (pretty much) everything is available, how do you cope?

I genuinely needed some advice. I even asked Metafilter too.

The result was a range of replies on Twitter, three whole comments on the post itself (which is some sort of record for this place) and tons of ideas on said community discussion board although in the end (after my clumsy editing of the blog post) they decided I probably need therapy.

After collating all of this, here's what I've done and am doing:

(1) Deleted online backlogs. Pretty much. One afternoon I went through Pocket, read a couple of the longer articles I'd been looking forward to about film and deleted everything else. Same for YouTube playlists.  Just about.

(2) Film wise: one of the Ask Metafilter people said quite rightly that the pile of blu-rays I have are bought assets so I should prioritise them. So I am between ...

(3) Watching Lovefilm discs when they come in. Which seems counterintuitive but it gets me out of the "list" culture (Disney in order etc) and also takes the choices away from me to a certain degree.

(4) Box sets: if I've bought them, I should watch them. Still working through Alias. Then Yes, Minister. Then Northern Exposure. But slowly.

(5) Television: only if it's something I'm genuinely interested in, and make fewer appointments to watch. That's what catch-up is for. But if I miss something it's not the end of the universe.  Plus television series, even the quality programmes, are ultimately saying the same thing over and over again.

(5.1)  Television documentaries as a format are broken.  Too often we'll be forced to sit through hour-long documentaries with about half an hour's material, even less, stretched to fill the duration, or an hour's worth of research stretched across three hours because the structure dictates the programme rather than the detail of the subject.  The problem is you sometimes don't realise that until half an hour in.  Don't know what to do about that other than to stick with presenters I like or tested formats.

(6) Reading: books in the house, magazines I've bought (unless spoilers). I've dramatically cut down on the RSS feeds I read.  Since I'll never know everything, I'm trying to learn that it's ok not to even try sometimes.

(7) One of the best pieces of advice I had was from the commenter mumoss on here:

"In the end, I've thrown up my hands and given up. Experience all you can, try not to get upset at what you don't. There's nothing you can do about it. Just try and be joyous that we live in a world where this is a problem. I'd rather have this issue than being bored for the rest of my life."


The underlying problem is still there.  If you're interested in everything, you can lose sight of what you're really interested in, but I'm hoping that will slowly emerge.  Again.

Sneak a peek on the cause of death of protoplanetary disks by Astrobites

Theory and experiment in astrophysics – a love story

The classical story on the interplay of theory and experiment in science is straightforward. The theorist rummages around at the blackboard until he can present a new ‘hypothesis’. An essential (and often neglected) point is that the new idea makes falsifiable predictions. The observer (in astrophysics) then checks whether the idea is present in nature and can be distinguished from other effects. If yes, everybody is happy and science once again wins a glorious battle! If no, the story starts from the beginning. Unfortunately (speaking as one), theorists sometimes tend to glorify the majesty and complexity of their models and forget to check if these are (still) in line with experiment. People in science tend to think differently about this topic, and the rise of computational methods in recent decades provides new challenges to this centuries-old way of advancing the natural sciences.

However, today’s featured paper stands as a role model in that respect and tries to quantify measurable predictions for the chaotic movement of particles, also known as turbulence, in protoplanetary disks (where young planets are formed), which is thought to be the cause of their ‘short’ (in astronomical terms) lifetimes of millions of years.

Ingredients for the recipe

Turbulence in general can originate from different effects. One is the interaction of gas with magnetic field lines (magnetohydrodynamics, ‘MHD’), which are present in the protoplanetary disk. To be able to make predictions for the influence of magnetic effects on the velocity distribution (i.e., the dynamics of the gas as evidence for the turbulent motion) in- and outside of the disk, the authors of this paper run several local MHD simulations by using the Athena code.

Figure 1: Cartoon of the layered structure of a protoplanetary disk, indicating the zones where some MHD effects take place and from where the CO line tracers originate. FUV means far ultraviolet, and the corresponding zone is where the particles in the disk are ionized by irradiation from the star. In the ‘ambipolar damping zone’ these ionized particles slow down the non-ionized ones from deeper regions via collisions. The different CO regions correspond to layers where different CO isotope transitions can be observed. (Simon et al., 2015)

‘Local simulations’ means that they do not concentrate on the disk as a whole, but instead on a specific region of it (for an image of such a ‘shearing box’, see the first author’s website). To get a better idea of what’s going on, Figure 1 illustrates the disk structure and the zones where the MHD effects may play a role. Especially important here are the regions labeled with ‘CO’, which is where the turbulence in the disk interacts with the CO molecules in the gas. This is where the interplay of theory and observations takes the stage.


Figure 2: Synthetic observations of the spectra of different disk models of HD 163296, ordered by strength of turbulence, which decreases from top to bottom. The flux gives the number of photons we observe from the emission line. Instead of showing a single peak, the velocity distribution in the disk (one part moving towards us, one away from us) produces a ‘broadened’ spectrum with a dip from the inner parts of the disk. The overall shape of the line is further affected by the stochastic motions of turbulent gas, which allows us to distinguish between models with and without turbulence. (Simon et al., 2015)

Dare to forecast!

The turbulence, caused by the magnetic fields in the disk, bends the dynamics of the gas in such a way that it becomes random in some areas. Therefore, the molecules in the gas show a different motion than they would without it. This change in velocity (simulated with the MHD code) is then transferred, by making use of the LIME code, into so-called mock-up observations of protoplanetary disks. These are simulated observations of the kind of light specific parts of the disk emit and what this looks like through a spectrometer. Molecules in the disk emit light when one of their electrons transitions from a higher to a lower energy level. However, turbulent motion and other effects ‘broaden’ the spectrum we receive from the disk, as you can see in Figure 2. It shows modeled spectra for the protoplanetary disk around the star HD 163296. β₀ and Σ_FUV essentially tell us the influence of turbulence in the simulation, starting from strong turbulence (top) to zero turbulence (bottom). You can see the difference in the model shapes, which can be detected by observing the real spectrum of such a disk.
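As a toy illustration of that broadening (this is not the paper's Athena/LIME pipeline, just the textbook quadrature sum of thermal and turbulent velocity widths, with assumed gas temperature and turbulent speed):

```python
# Minimal sketch: turbulent motion adds in quadrature to the thermal
# velocity of the gas, widening a CO emission line. Temperature and
# turbulent-speed values below are assumed for illustration.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_CO = 28 * 1.6605e-27  # approximate mass of a CO molecule, kg

def line_width(temperature_k, v_turb_ms):
    """1-sigma velocity width (m/s) of a line from gas at the given
    temperature, with an extra turbulent velocity component added
    in quadrature to the thermal one."""
    v_thermal_sq = K_B * temperature_k / M_CO
    return math.sqrt(v_thermal_sq + v_turb_ms**2)

# At an assumed 40 K: thermal-only vs. thermal + 100 m/s of turbulence.
print(line_width(40, 0))    # ~109 m/s
print(line_width(40, 100))  # ~148 m/s
```

Even a modest turbulent speed widens the line by tens of metres per second, which is exactly the kind of subtle change that demands an instrument as sensitive as ALMA.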

This is why the CO zones in Figure 1 are important: the emissions from this molecule act as a ‘tracer’ to probe different layers of the disk and allow us to quantify the influence of turbulence on the gas in these layers. However, such line observations need an instrument with a sensitivity high enough to detect those tiny variations in the incoming light. The authors’ telescope of choice will therefore be the famous Atacama Large Millimeter Array (ALMA), a network of telescopes in the Chilean mountains. ALMA has been in its science phase for some time now and can finally be used to take awesome pictures. The plan for the future is to use the simulated line tracers in this paper to unveil signatures of MHD turbulence in the disk around HD 163296. Thus, we will soon be able to shed some light on a main driving mechanism of accretion disk dispersal.

Ceres coming into focus by The Planetary Society

The Dawn mission released new images of Ceres yesterday, taken on February 4, when Dawn had approached to within 145,000 kilometers. More details are coming into view, and they're fascinating. For one thing, there's not just one white spot any more: there are several.

FOIA Request Sheds Light on NASA Mission Extension Process by The Planetary Society

A FOIA request offers insight into NASA's planetary science extended mission review process, which seems, at best, confusing, and at worst—with adjectival ratings like “Very Good/Good”—arbitrary.

February 06, 2015

Editing Selma. by Feeling Listless

Film The Credits has an interview with Spencer Averick describing the process of editing Selma which has some useful material about how an editor's choices influence the director and vice versa:

And once production is over, how do you tackle all the scenes you’ve now strung together?

"Then I get back to LA and I have a week or two to continue editing by myself without Ava, to finish the editor’s cut, trying to keep up. When they shoot three scenes in one day, I’m trying to edit three scenes in one day as well, which is hard to do, so you fall behind. So those couple weeks after you get back is your time to catch up. By no means is the editor’s cut great, but the movie’s at least put together, although it’s long and it’s rough. Then Ava comes in after two weeks, she watches the editor’s cut, then we go from there. Then it was five months for the two of us. We had our producers, Plan B, Oprah, and Paramount in dialogue and giving us feedback as well. It’s a collaborative experience, but mainly it’s Ava and I together."
Auteur theory tends to consider films in relation to the director or producer or even studio in regards to modes of production. Averick has only really worked with Ava DuVernay, but on a wider note, I wonder about the extent to which you can tell who's edited a film through watching, if there are particular editing styles.

Good News on Net Neutrality by Albert Wenger

Yesterday FCC Chairman Tom Wheeler announced that he would be pursuing Title II classification for wired and wireless ISPs as a way to ensure net neutrality. This is potentially a big step forward on a topic that I have written a lot about here on Continuations. The devil is of course in the details and there is a big fight looming with some of the large cable companies. So we won’t know what we actually get for quite some time.

In the meantime this is a good moment to thank the organizations that have worked tirelessly to keep this topic on the minds of Americans and thus pressure on the FCC. In particular I want to call out Fight for the Future and suggest that you make a donation to help support them.

PS After you have done that you might also want to re-watch John Oliver’s epic segment on the topic which is still funny.

Rings around another world may have been sculpted by exomoons by Astrobites

Title: Modeling giant extrasolar ring systems in eclipse and the case of J1407b: sculpting by exomoons?
Authors: Matthew A. Kenworthy & Eric E. Mamajek
First Author’s institution: Leiden University
Status: Published in ApJ

Rings have been detected in another Solar system and, just like the rings of Saturn, they may have been carved out by exomoons.

For more than twenty years we’ve known of the existence of other Solar systems; we’ve found thousands of wacky star-planet, star-planet-planet, star-star-planet systems. So many, in fact, that they’ve started to become a little pedestrian. Oh, you’ve found another exoplanet? Not another hierarchical triple? Yawn… But this system wins the ‘interesting award’: It has rings, or, to quote the authors, an “exosatellite-sculpted ring structure”.

Figure 1 – the light curve of J1407 seen by SuperWASP in 2007. The panels show the data (red) and one of the models that was fitted to the data (green). The top panel shows the entire light curve and the lower panels show snippets of the light curve at the positions marked by the black triangles in the upper panel. The model doesn’t do a perfect job of matching the data – this is a pretty complicated model with lots of free parameters, so it’s a really tough problem to solve!

Unlike a normal exoplanet transit, where a dark disc (a planet) passes in front of a bright disc (a star), producing a simple, single dip in brightness, these astronomers observed a very different phenomenon back in 2007. Several deep dips in the brightness of the star ‘1SWASP J140747.93-394542.6’ (J1407 for short) were observed by the SuperWASP telescope, one after the other, lasting a total of 56 days. These dips clearly weren’t caused by an exoplanet occulting J1407; they were too irregular and too long in duration. The authors of the original discovery paper suggest that they were caused by a huge disc of material that orbits another, unseen star or planet, which in turn orbits J1407. It was this disc that passed in front of J1407, blocking 95% of its light, and Kenworthy et al. postulate that the series of dips was produced by a ring-like structure. The light curve is shown in figure 1.

Kenworthy et al. used the following model to describe the system: two objects, one without rings (star A) gets transited and one with rings (object B) does the transiting. I’m being deliberately ambiguous about the nature of object B; the reason for this will become clear. The authors used the information in the light curve (the change in brightness of star A over time) to build up a picture of the disc.

As object B’s disc passed in front of star A it blocked some of its light. Not all of it though—the disc is translucent and some light got through. The amount of light that managed to pass through the disc is called the ‘transmission’. Different rings in the disc have different densities and therefore different transmission levels. Kenworthy et al. used this fact to model the rings using the light curve.

The most important information was encoded in the slopes within the light curve, i.e. the rate of change in stellar brightness. Think about a disc of semi-opaque material, slowly advancing across the face of star A. The amount of light you’d see coming from A will slowly decrease as the disc advances, until it is entirely eclipsed, so a downwards slope will appear in the light curve. Any change in slope marks a moment when a ring edge either begins or finishes its transit of star A (the star gets dimmer faster if a more opaque section of the disc moves in front of it). So the number of changes in slope corresponds to the number of rings!
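
To make the link between brightness changes and ring edges concrete, here is a toy Python sketch (my own illustration, not the authors’ code; all radii and transmissions are made-up numbers). It treats the star as a point source, so each ring edge produces a step in flux rather than a slope, but the counting idea is the same: every change in flux level marks a ring-edge crossing.

```python
import numpy as np

# Hypothetical concentric rings: outer radius of each ring (arbitrary units)
# and the fraction of starlight each ring lets through (its transmission).
ring_edges = np.array([0.5, 1.0, 1.5, 2.0])
transmissions = np.array([0.3, 0.8, 0.5, 0.9])

def relative_flux(r):
    """Flux from the star when it sits a projected distance r from the ring centre."""
    for edge, t in zip(ring_edges, transmissions):
        if r < edge:
            return t
    return 1.0  # outside all the rings: the star is unobscured

# The star's projected position as it crosses behind the ring system
# along a straight chord with impact parameter b.
b = 0.2
x = np.linspace(-2.5, 2.5, 1001)     # position along the chord
r = np.hypot(x, b)                   # distance from the ring centre
flux = np.array([relative_flux(ri) for ri in r])

# For a point source, each ring-edge crossing is a step in the light curve,
# so counting flux-level changes counts the edge crossings.
edge_crossings = int(np.count_nonzero(np.diff(flux) != 0))
# Four rings, crossed once on ingress and once on egress: 8 crossings.
```

With a star of finite size the steps smear out into the slopes described above, and a slope change (rather than a jump) marks each edge, but the bookkeeping is identical.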

Figure 2. A diagram of one of the ring models. The redness of the rings corresponds to transmission; more light passes through redder rings. The green line shows the path of star A as it is transited by the rings.

Unfortunately, since SuperWASP is a ground-based telescope it can’t observe stars during daylight (only space telescopes like Kepler get that luxury). As I mentioned before, the total transit event lasted 56 days, so inevitably there are some gaps in the data. Around 56 of them. This means that Kenworthy et al. can’t count every ring; they can only put a lower limit on the number. They count at least 24 ring edges but suggest that the real number is probably higher.
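
As a toy illustration of why the gaps make this a lower limit (again with hypothetical numbers, not values from the paper): any edge crossing that happens while the telescope can’t observe simply never appears in the light curve.

```python
import numpy as np

# Hypothetical ring-edge crossing times, in days from the start of the event.
crossing_times = np.array([1.2, 3.7, 8.4, 12.1, 19.9, 25.3, 33.6, 41.0])

# Crude stand-in for nightly observing windows: the star is only
# observable during the first half of each 24-hour day.
observed = (crossing_times % 1.0) < 0.5
n_observed = int(np.count_nonzero(observed))

# Crossings falling in the daytime gaps leave no trace in the data,
# so the observed count is only ever a lower limit on the true number.
print(n_observed, "of", len(crossing_times), "edge crossings observed")
```

Here 5 of the 8 invented crossings happen to land in an observing window; the other 3 are invisible, just as some of J1407’s ring edges presumably slipped through SuperWASP’s daytime gaps.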

The authors don’t find a unique solution to their model (several ring configurations are consistent with the data). However, they find that gaps between the rings (i.e. actual empty space between the regions of different transmission levels) are supported by the data. One of the ring models that was fitted to the data is shown in figures 1 and 2.

What about the mysterious ‘object B’? Well, since only one transit was observed, the authors can’t be sure that B is actually orbiting A: it could just be wandering through space and happened to pass in front of A. Assuming, however, that B does orbit A, Kenworthy et al. place an upper limit on its mass: 24 times the mass of Jupiter. That upper limit puts it in the brown dwarf regime, somewhere in between a planet and a star.

The authors offer an explanation for the creation of the gaps between the rings: they were “sculpted by exomoons”. This is exactly what happens in the rings of Saturn, where gaps in the ring structure are produced by moons which carve out a path via gravitational interaction with the disc material. They are called “shepherd moons” because they ‘herd’ the disc material.

Although this paper provides tantalising evidence for the existence of exomoons, until we detect one directly we won’t be able to confirm their presence in any Solar system other than our own. Still, the idea behind this paper is pretty cool, even if the conclusion is a bit of a stretch. The hunt for exomoons continues….

2016 Budget: Great Policy Document and Much Better Budget Plan by The Planetary Society

Van Kane gives a summary of the 53-page proposed Fiscal Year 2016 NASA Planetary Science budget.

February 05, 2015

More obligatory author shilling by Charlie Stross

Charlie here, popping in with an announcement about next week.

Some of you may be familiar with Pandemonium Books and Games in Cambridge, Mass., a most excellent establishment. Some of you might also be familiar with Boskone, the regular mid-February Boston SF convention. A bunch of authors go to Boskone, and we also do events at Pandemonium, so mark your calendars:

UPDATED Thursday February 12th, 7pm: Elizabeth Bear and Scott Lynch, now with special guest Charlie Stross, will be signing books, reading, and generally entertaining you at Pandemonium! And yes, this is the official launch party for "Karen Memory":

(I'll probably move on to the Cambridge Brewing Company after the event, assuming it's open; feel free to tag along.)

You can also catch all of us at Boskone, over the weekend!

Soup Safari #16: Chicken Noodle at the Soul Cafe & Bar. by Feeling Listless

Lunch. £3.95. Soul Cafe & Bar, 114 Bold Street, Liverpool, Merseyside L1 4HY. Phone: 0151 708 9470. Website.

Subscriptions (feed of everything)

Updated using Planet on 1 March 2015, 06:48 AM