Francis’s news feed

This brings together on one page various news websites and diaries which I like to read.

September 04, 2015

Badger us for tech advice at our new Open Sauce Lunches by Aptivate

Are you an NGO struggling with questions or problems about using technology?  Join us for lunch and take advantage of our expert team to help you.  For free!


Evaluating Digital Citizen Engagement - a practical workshop for understanding and improving impact by Aptivate

A participatory and practical one-day workshop looking at the effective evaluation of Digital Citizen Engagement (DCE) initiatives.


The Refugee Crisis: We All Need to Act by Albert Wenger

I have not been blogging much in the last couple of weeks as I have been working on my book. But I have been reading coverage of the refugee crisis as well as some great opinion pieces by Zeynep Tufekci (I highly recommend following her on Twitter). There have been many heart wrenching images and stories that likely only scratch the surface of the human suffering and tragedy that is unfolding.

While there will be a time to discuss long-term solutions aimed at the origins of the crisis, we have to act now to relieve the immediate suffering. One thing to do is to donate and here is one list of organizations that help. Susan and I have chosen to support Refugees Welcome, which is an initiative out of Berlin that lets individuals volunteer to host refugees. This is exactly the kind of bottom-up initiative that we believe in and we hope that large housing/apartment sharing sites such as AirBnB and others will wind up supporting this or other efforts like it.

We also need to reach out to our elected representatives and let them know that our countries cannot stand by and let this happen on the theory that it is someone else’s problem or, worse yet, that maybe fewer people will try to escape their circumstances. No matter what our individual or collective fears about immigrants might be, nothing can justify the suffering. We all bear responsibility and we all need to act.


A Magnetized Universe: How galaxies are influenced by magnetic fields by Astrobites

Title: Effects of simulated cosmological magnetic fields on the galaxy population
Authors: Federico Marinacci and Mark Vogelsberger
First Author’s Institution: Kavli Inst. for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA
Status: Submitted to MNRAS

Magnetic fields are one of the most challenging astrophysical phenomena to both observe and simulate. Although we see signatures that magnetic fields are everywhere in the Universe, they are often very weak and challenging to characterize in detail. For example, the Milky Way is host to a magnetic field on the order of 10⁻⁶ G, nearly a million times smaller than the Earth's magnetic field. We know magnetic fields exist on galaxy scales and larger in the Universe, but we don't know how they got there, and we don't completely understand their role in how our Universe has evolved.

In order to help constrain current observations on the strengths of magnetic fields in the early Universe, and possibly develop new signatures of what the original “seed” magnetic fields were like, the authors of today’s astrobite use cosmological hydrodynamics simulations (or magnetohydrodynamics, MHD) to study how various magnetic field strengths in the early universe influence galaxy properties today. Simulating the effects of magnetic fields accurately is very challenging, and usually involves making many approximations. For this reason, many simulations do not include their effects. However, if magnetic fields have a dramatic effect on galaxy evolution, it may very well be important to understand them in detail in order to understand how galaxies actually form in our Universe. In addition, their effect on galaxy evolution, as explored through simulations, may provide strong predictions and unique observational signatures that can tell us more about the origin of large scale (galaxy-wide and even larger) magnetic fields in our Universe.

Weak Fields, Big Effects

Although the magnetic fields that permeate the Universe are very weak compared to our everyday experience, they can have dramatic effects. Magnetic fields act to supply pressure to whatever gas or plasma they reside in; that is, pressure in addition to the thermal pressure of the gas. Just as gas pressure, or thermal pressure, is generated from the kinetic energy of molecules within the gas, magnetic pressure comes from the energy contained within the magnetic field that exists within the gas. Our current observational and theoretical understanding is that magnetic fields were seeded in the early Universe, by some uncertain mechanism, with strengths on the order of 10⁻⁹ G (and maybe even smaller), a factor of 1000 less than those currently observed in galaxies. As galaxies formed and the gas within them was compressed from an initially more diffuse state, the magnetic field strength grew over time. Depending on its initial strength, the extra pressure from magnetic fields during this process could have dramatic effects on how many galaxies form, the distribution of galaxies, and how stars form within galaxies. Using analytical arguments, the authors show that seed magnetic field strengths around the level expected (10⁻⁸ – 10⁻⁹ G) contribute enough additional pressure to the gas to play a major role in galaxy evolution.
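
A quick sketch of the pressure argument in symbols (standard plasma physics, not an equation quoted from the paper): in Gaussian units the magnetic pressure is

P_B = B^2 / (8\pi)

and its importance relative to the gas is usually expressed through the plasma beta, \beta = P_thermal / P_B. When \beta drops towards 1, the magnetic pressure becomes comparable to the thermal pressure and can no longer be neglected in how gas collapses into galaxies and stars.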

Magnetic Fields in a Simulated Universe

Using several cosmological simulations run with the AREPO code, the authors tested how various initial magnetic field strengths affected galaxy evolution and star formation over time. They used initial field strengths in the 10⁻⁸ – 10⁻⁹ G range discussed above, and both above and below this range. The top left panel of Figure 1 shows the evolution of the root mean square magnetic field in each of their simulations (the initial magnetic field strength is given in the legend next to the line colors), from the weakest (black) to the strongest (purple). In each of these panels, time is denoted by redshift, z, which reads from right to left (z = 0 is present day). The top left and top right plots only show redshifts z = 10 to z = 0, while the simulations begin at z = 127; this is why the magnetic field strengths on the right side of the graph are not the initial values. The remaining plots in this figure constitute the main results of this work. While there is a lot going on, let's take some time to digest it.

Figure 1: The effects of various initial seed magnetic field strengths on various galaxy properties for all galaxies within the simulation. The top left plot shows how the root mean square magnetic field evolves in the simulation over redshift (the plot reads right to left, with z = 0 being present day, and z = 10 early universe) for each of the initial field strengths, given by the various colored lines. Pink is the strongest, black the weakest. The remaining plots show how the magnetic field changes how quickly stars form (top right) described as the star formation rate density, the galaxy number density as a function of galaxy total stellar mass (bottom left), and finally the total stellar mass of every dark matter halo as a function of the dark matter halo mass. The gray lines show either observations (top right, bottom left) or predictions (bottom right). (Source: Figure 2 of Marinacci & Vogelsberger 2015)

In the top right of Figure 1, the authors show the star formation rate density (SFRD) in terms of solar masses per year in the entire simulation over time. This is compared to a collection of observations of our Universe (gray). As shown, stronger initial magnetic fields decrease the SFRD (fewer stars overall) at all times, with magnetic fields above 10⁻⁹ G producing the largest effect. This effect extends also to galaxies (lower left plot). This figure shows the number density of galaxies as a function of the total stellar mass of those galaxies. That is a rough sentence to swallow, but this plot can be thought of as a histogram of the number of galaxies in the simulation that contain a certain total mass of stars. The lower the line, the fewer galaxies there are at a certain mass. This decline is especially true for lower mass galaxies, while the effect is smaller for larger galaxies. The lower right plot shows the ratio of the galaxy stellar mass to total mass (stars + gas + dark matter) as a function of the total mass of the dark matter halo in which they reside. In each case, a stronger magnetic field means that a given dark matter halo contains fewer and fewer stars. As in the lower left plot, this effect is much more dramatic for the lower mass galaxies. All of these effects indicate that a stronger magnetic field provides enough additional gas pressure to prevent gas from cooling and collapsing, a necessary process to form both stars and galaxies, hampering the growth of galaxies.

Informing Future Observations

Today's astrobite shows that magnetic fields can have a dramatic effect on galaxy evolution. This is important, as it can eventually help constrain what magnetic fields in the early Universe looked like, giving astronomers another tool to better understand a phenomenon that is very challenging to observe.


Chang'e 5 test vehicle maps future sample return site by The Planetary Society

This summer the Chinese space agency has been making progress toward its planned 2017 launch of the Chang'e 5 robotic sample return mission, performing low-altitude imaging of the future landing site.


Mars Exploration Rovers Update: Opportunity Digs Marathon Valley Walkabout by The Planetary Society

Opportunity drove farther into Marathon Valley in August, dug into what appears to be a water-altered rock, and took a lot of picture postcards in what is turning out to be a distinctively different site from any that the mission has found since the robot field geologist landed in 2004.


September 03, 2015

Catastrophic. by Feeling Listless

Art Yesterday, I finally managed to see the Whitworth gallery in Manchester for the first time since it reopened following a refurbishment and the building of a new extension, all of which has been awarded Building of the Year by RIBA. Having rather loved the old place I was concerned that all its 60s wood panel and stone and modernist intentions and nooks and crannies had been swept away but I'm pleased to see that the architects at MUMA have simply built upon what the introductory booklet rightly describes as "Scandinavian-style spaces" by Bickerdale, Allen and Partners.  The new large atrium cafe sits within and overlooks ancient trees and opens out into Whitworth Park in a way which feels both urban and pastoral.  As I sat eating a cheese bap, I felt that any time I wasn't looking out of the window was being wasted, especially on a day with such changeable weather.

Unfortunately, the display of the Whitworth's other key draw, its permanent collection, which is arguably of national, if not international, importance, is nothing short of catastrophic.  The notion, as tends to be the vogue for smaller galleries, is to present the works in a series of shorter exhibitions around themes, rather than simply have everything on show, all of the time.  This is not something I necessarily disagree with, since it allows for a wider selection of the work to find wall space and also has the added benefit of generating repeat visits from an audience who might otherwise stay away having assumed that they'd already seen what was there.  At the moment the themes are acquisitions from the 1960s, Watercolours, Green textiles, New Acquisitions and Portraits.

Except the way they're displayed is dismal and abysmal.  In all cases, the policy seems to be to get as much out there as possible and so the works have often been splattered across the walls without any apparent notion of how they'll be viewed by the public, floor to ceiling with centimetres between the frames, not unlike a commercial gallery, and with what seems like an eye to how they work aesthetically in an interior design sense rather than how they relate to one another.  Whilst this isn't unusual in galleries, it tends to be with larger canvases - see the main atrium at Birmingham Art Gallery or many rooms at the Walker.  But with smaller works (these are watercolours, prints and even oils), it's impossible to focus on one image over another, the eye darting from one to the next, each constantly and intensely distracting from the others (not helped, however understandable it is as a conservation requirement, by everything being glazed, leading to obscuring bulb flare).

To make the job of enjoying the work even more impossible, because of the proximity of the frames to one another, there isn't any room for labels, and so the gallery has instead provided booklets containing wall maps and silhouetted boxes as a key through to titles and their artists, which works rather like an Argos catalogue.  This means the visitor spends half their time in the space with their head down trying to match the painting they're looking at with the details, and I did hear visitors in pairs and groups not discussing what they thought of a painting but whether it was the one that was in the booklet.  Oh and outside of the New Acquisitions barely anything has supporting material, so in some cases it's impossible to really appreciate a work and how it's important within its own context.

I tried.  I tried.  In the Watercolour section there's a wall filled with Turners, but they're piled into two columns next to each other reaching floor to ceiling and none of them at typical eye height.  The Portraits exhibition is the epitome of how damaging these choices are, as Hogarth prints are thrown in with pre-Raphaelite paintings and tapestries, Freud paintings, photographs and video art, but they're just sort of there, assuming your eyes can actually see them, assuming you're not having to crouch or wish you had a step ladder to see things properly.  To be fair, all of this is also true of the Royal Academy Summer Exhibition, which also has higgledy-piggledy staging and booklets, but it's also true that's a light airy space whereas the Whitworth has necessarily darkly painted walls and subdued lighting.  Eventually I walked, unable to cope with the confusion of images.

Surrounding all of this too, I could see, was empty wall space, or large dramatic walls with just one or three measly paintings or prints at the bottom, so this isn't just about maximising the potential of the space.  I've been to many regional art galleries with a similar (if not necessarily as important) collection stuck on a landing or in a stairwell, but that is usually because they don't have the budget or the room.  The Whitworth seems to have plenty of both.  Try and replace the display format if you want, but just as the page upon turnable page antics of the printed book have survived for hundreds of years because it works, the model of artwork at shoulder height with an information label does too, because it allows the viewer to think, to ponder and to concentrate on a single image at a time.  Much is gained from sacrificing quantity for lucidity.

Elsewhere in the gallery at the moment, there's a really excellent exhibition of Chinese art from the 1970s which follows the classical display model with labels next to the paintings and free booklets containing pages of contextual information.  At the centre of the space is a really poignant Ai Weiwei installation, Still Life, in which the artist presents hundreds of Stone Age axe heads and various other ancient carved paraphernalia with the context deliberately removed, the reinvention as an art installation "an iconoclastic gesture designed to offset the value and importance of these ancient objects".  As I reflect on this, this is pretty close (albeit with more facile implications) to what the Whitworth's done to its permanent collection in these initial presentations.  Luckily it is just the initial presentations which will change and hopefully with a less chaotic, more thoughtful approach.

I hope that they also won't bury the headline as much either.  Within what I think is another part of the extension is a study area and it's here that some of the real jewels of the collection are displayed, largely unheralded and well away from the main spaces, so easily overlooked.  A Rembrandt drawing.  Two Lowrys.  One of those Constable cloud sketches.  Another Turner.  I think I saw a Pissarro.  I definitely saw an Ian Hughes and a Stanley Spencer.  Why would you bury this stuff in what feels like the basement (albeit with a ground level entrance)?  Oh and again mostly without accompanying information?  Yes, at least I've seen them and it was nice to enjoy the experience in something akin to corporate office space rather than a "white cube", but it's listed as "Collections display" on the map which really does obscure what's here.  I could have missed it.

Well that was a rant but I think it's rare that I've been this disappointed with this kind of experience.  I'm usually pretty satisfied if there are some nice paintings to look at and an adequate toilet.  There are several toilets and they're more than adequate but like I said the building deserves the awards it's received.  Along with the Chinese art exhibitions, there's also a spectacular Cornelia Parker showing of a new artwork which consists of a massive embroidered reproduction of the Wikipedia entry about the Magna Carta stitched together by two hundred people often with a life connection to the words.  So the temporary displays are varied and rather special.  But I just feel, and I appreciate this is really a disagreement about a curatorial decision or policy, that the permanent collection should be as well served.  Rant over.


Kayak Dive Donegal by Goatchurch

The past five days have been kayak diving and camping in Donegal. Now we’re in Ballycastle for four more days of expensive boat diving where the weather has turned bad and probably won’t be so good underwater, but at least we’ve got a roof over our heads.

Camping in Ireland seems pretty easy. The two places we’ve stayed at had “day rooms” for people with tents where you can do your cooking, make toast, sit down and drink tea.

First two days were at Downings, diving in the almost totally enclosed Broadwater Bay at Massmount, and then at the totally exposed Melmore Head once we’d got our confidence back.

Then it was south to pitch tent at Derrylahan hostel before spending the day out at Malin Beg having to seal launch the kayaks off the slip at very low tide and poking our noses along the Slieve League cliffs for a couple of kilometres to check if paddling its full length was going to be a silly plan. The local fishermen thought it was an okay idea and said there weren’t any currents. With a northerly wind blowing we had perfect shelter.

Here's a short video of a dive into a shoal of mackerel at the mouth of a huge cave. I noticed them only because they broke the surface as they streamed past my canoe and it looked like rain was falling onto the water even though the cavern ceiling was dry.

The next dive onwards was on the east side at exactly the tip of Carrigan Head, completely sheltered from the wind, waves and current by a 2 metre high headland of rock, but where the sounder registered a sudden dropoff to 20 metres. The place was scoured of kelp revealing a low animal turf and dozens of large wrasse fish parked in the slot doing nothing in particular. As usual, Becka did the cycle back along the road to fetch the car while I packed the gear and walked up the road to the hostel to have a cup of tea.

On our fifth day in Ireland we got air fills at Dive Donegal before using it all up on two long shore dives from St Johns Point. If anyone is counting, the one on the left out from Portnagh Rock should be in the top ten shore dives of the world for its perfectly designed architecture of satisfaction. You use up your air at the perfect rate at the perfect depth and everything is easy to find. The karst rock of the main reef has eroded into shelves that are like a condominium hotel for critters (one alcove contained a fat lobster chewing on the hide of a dead dogfish). It's worth the drive, even if we couldn't find a decent breakfast anywhere nearby to fill us up in the morning.

All this kayaking is exhausting and makes me not interested in spending many hours at the computer. I’ve got all winter to do this when I get home and settled down.


Starspots to Measure Small Differential Rotation Rates by Astrobites

The Sun rotates in 25 days. The star GJ 1243 the authors are studying rotates in 14 hours. If the Sun revolves like the restaurant on the Space Needle, GJ 1243 revolves at the speed of a lighthouse. Compared to the Sun, GJ 1243 rotates fast.

The authors of today's paper are tracking starspots. Starspots are the sunspots of other stars, i.e. magnetically active areas on their surface. The magnetic activity of starspots prevents the warm inner regions of a star from rising up to the surface. This makes starspots colder than the surrounding areas, making them appear black (see Figure 1). Therefore, starspots make the brightness of a star fluctuate as they move across the star. These fluctuations can be detected, and can be used to study starspots.
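
As a rough illustration of why this produces a measurable signal (a standard back-of-the-envelope estimate, not a number from the paper): a spot covering a fraction f of the visible stellar disc, with temperature T_spot on a star of effective temperature T_eff, dims the star by approximately

\Delta F / F \approx f [ 1 - (T_spot / T_eff)^4 ]

so the brightness dips each time the spot rotates into view and recovers as it rotates out of sight again.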

Figure 1: Sunspots, with the approximate size of Earth shown for reference. Image credit NASA/SDO.

Unlike the Space Needle and lighthouses, stars are fluid and tend to rotate differentially: the equator rotates faster than the poles, like a smoothie, prepared in a blender—the type with the blades in the middle. Differential rotation causes starspot features to shear and change with time. This shearing and evolution of starspots can be used to measure the rate of stellar differential rotation. We know that differential rotation plays an important role in stellar magnetic dynamos, i.e. in how stars generate magnetic fields. However, the link between differential rotation and the generation of magnetic fields in low-mass fully-convective stars remains unclear.

Using Kepler to track starspots around a cool star

The authors tracked the starspots on the active M dwarf GJ 1243, using photometric data from the Kepler spacecraft. In today’s paper, the authors present the smallest differential rotation rate that has ever been robustly measured for a cool star. Their result provides important constraints for dynamo models of cool low-mass stars.

The authors present evidence for starspot features on GJ 1243, illustrated in Figure 2. The primary feature (orange area) is a very long-lived starspot that is constant in phase. The secondary feature (purple spot) evolves on timescales of hundreds of days, both in phase (moves in longitude) and in amplitude. The authors cannot determine the exact geometries of these features, which could in fact be two large spot groups (e.g. see Figure 1), but for simplicity they are denoted as single large spots in Figure 2.

Figure 2: A schematic illustration of the primary (orange), and secondary (purple) starspot features at two points in time. The primary feature is long-lived and evolves slowly, while the secondary feature evolves in both phase (moves in longitude), and size. Each feature could be a single large spot, or a large group of smaller spots. Upper part of Figure 4 from the paper.

A phase diagram to find starspots

Figure 3 shows the main diagram the authors use to find starspot features using the Kepler data. The figure shows a grid of phase as a function of time for the 4 years of Kepler flux data from GJ 1243. A change in phase is here equivalent to a change in longitude of a starspot feature, in the direction of the star's rotation. Each pixel in the grid spans 0.04 in phase (or 14.4 deg in longitude), and 10 days in time. The shading denotes the flux: the darkest regions correspond to a flux 1.5% below the median value, while the lightest pixels to a flux 1.5% above the median value. The phase information is repeated twice (showing phase from -0.5 to 1.5) for visual clarity—for example, the dark band at phase 0 is the same as the dark band at phase 1.
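
For concreteness, the phase convention here is the standard one (not spelled out explicitly in the post): with rotation period P, an observation at time t is assigned the phase

\phi = (t mod P) / P

so one full rotation spans \phi = 0 to 1, and a bin of width 0.04 in phase corresponds to 0.04 × 360° = 14.4° of stellar longitude.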

Figure 3: The diagram to find starspots: a continuous phase map of the 4 year Kepler dataset for GJ 1243. Pixel shade, from dark to light, indicates the median flux in each (time, phase) bin. White gaps show times with no data. The data is folded twice in phase for visual clarity. Figure 3 from the paper.

Using this diagram the authors find two starspot features. Figure 4 shows the same diagram as above, with their best-fit starspot model (assuming two starspot features) overlaid. The dark band centered at phase 0 (overlaid with orange points) is due to the primary spot. This feature does not significantly change in phase and flux. Conversely, the secondary feature (overlaid with purple points) continuously changes in both phase and flux amplitude. Moreover, this feature appears at least three times in the dataset on the opposite hemisphere (phase 0.5) of the primary feature, and then moves in longitude towards the primary. Interestingly, these starspot features are very large compared to typical starspots on the Sun (<1° to about 20°), appearing to span about 50° to 90° in longitude.

The authors interpret the slow, linear phase evolution of the secondary starspot to be the signature of differential rotation. Using the best-fit lines shown in Figure 4 below, the authors determine a differential rotation rate of 0.012 radians/day for GJ 1243. This is small compared to the Sun's differential rotation of 0.055 radians/day, making GJ 1243 rotate largely as a solid body. The authors note that this result is in agreement with previous theoretical work in the area, which predicts that differential rotation decreases with increasing stellar rotation rates and decreasing effective stellar temperatures.
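
A back-of-the-envelope way to see what "largely as a solid body" means, using only the numbers quoted above (this illustration is not taken from the paper): the time for the faster-rotating latitudes to lap the slower ones by one full turn is roughly

t_lap \approx 2\pi / \Delta\Omega \approx 2\pi / (0.012 rad/day) \approx 520 days

during which a star spinning once every 14 hours completes of order 900 rotations. The Sun, by comparison, laps itself in roughly 2\pi / 0.055 ≈ 115 days, only about 4 to 5 of its 25-day rotations, so relative to its spin GJ 1243 shows far less shear.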

Figure 4: The same diagram as in Figure 3, with overlaid starspot models. The primary feature (overlaid with open orange points) is nearly constant in phase. The secondary feature (overlaid with purple points) evolves significantly. The linear trend of the secondary feature, fit with black lines, indicates differential rotation. Figure 5 from the paper.

Towards a better observational understanding of differential rotation

Modeling starspots on other stars is challenging. The technique described in this paper is best suited for studying long-lived spots on rapidly rotating stars. The results presented in this paper have placed important constraints on the magnetic dynamos of cool low-mass stars. With further study, and extending this technique to study hundreds of other rapidly rotating active stars, we can start to gather a better observational understanding of surface differential rotation across the main sequence.


CubeSats to the Moon by The Planetary Society

Casey interviews Dr. Craig Hardgrove about his lunar CubeSat, how it came together, and how NASA's support for small missions is important for early career scientists like himself.


September 02, 2015

Extracting the BBC Genome: Lost and Found. by Feeling Listless



Film Back in 1994 and 1998, BBC Two offered a strand called "Lost and Found", seasons of films described as being "rarely (or never before) seen on television, or presented in restored versions" something I was entirely unaware of until the VHS Video Vault YouTube channel, part of the VHistory blog, uploaded an introduction to Michael Mann's The Keep (see below).

The BBC Genome predictably has a list of all the films in this strand. Just as a test, I'm also going to see how available these ancient scheduling choices are now.  At a time when the notion is that everything is supposed to be just there, sometimes it's good to see if that's actually true, especially with a list of films developed before the introduction of dvd.

1994 Season

Becky Sharp

(Rouben Mamoulian, 1935)

No official UK release but multiple other editions available from abroad.  But buyer beware.  The reviews are the same on all the editions listed on Amazon.  This page says the cheapest has atrocious sound.  There's a pleasantly poor print at the Internet Archive which says the film's in the public domain - which might account for why it's all over Google's video service as well.

A Star Is Born 
(George Cukor, 1954)

I actually have a copy of the Warner Home Video BD which is utterly gorgeous and seems to be the version suggested in the listings, with scenes restored from surviving footage (mainly the characters in long shot travelling and getting in and out of cars) and set photographs with an audio track underneath which looks for all the world like a Doctor Who recon.

The Ghost Ship
(Mark Robson, 1943)

Region one double pack with Leopard Man or region two double pack with The Seventh Victim.



The Keep
(Michael Mann, 1983)

Still no dvd release.  Only available on a deleted VHS.  Given that, it is, quite extraordinarily, on Netflix UK at the moment.  Best watch it then.

Caged Heat
(Jonathan Demme, 1974)

Multiple editions, with varying degrees of exploitation covers.

Before the Revolution
(Bernardo Bertolucci, 1964)

UK BFI release.

Pursued
(Raoul Walsh, 1947)

Region free dvd release.

Tokyo Drifter
(Seijun Suzuki, 1966)

UK dvd release from Yume Pictures at a budget price.

Confidential Report
(Orson Welles, 1955)

UK release as Mr Arkadin.  There's also a Criterion release which has three different versions.

I Only Want You to Love Me
(Rainer Werner Fassbinder, 1976)

UK release of a restored print from Park Circus.

It Happened Here
(Kevin Brownlow and Andrew Mollo, 1963)

UK release.

Suddenly
(Lewis Allen, 1954)

Multiple UK releases, which suggests it's out of copyright.

Le Samourai
(Jean-Pierre Melville, 1967)

Criterion release (R1).  French BD.

1998 Season

Waterfront
(Michael Anderson, 1950)

UK BD release from Network.  Well done chaps.

The Man from Morocco
(Max Greene, 1944)

UK DVD release from Network.

Car of Dreams
(Graham Cutts & Austin Melford, 1935)

UK DVD release as part of a John Mills boxed set by ITV Studios.

Owd Bob
(Robert Stevenson, 1938)

UK DVD release from Odeon Entertainment.

Street Song
(Bernard Vorhaus, 1935)

No dvd release.  Not at the Internet Archive or on YouTube.  The list has finally defeated us.

Nevertheless, for whatever reason, most of these films are available in some form or other, and loads in the UK, thanks in large part to Network.  Congratulations to the modern world.


Magic systems and my world building process by Charlie Stross

Hi everyone, this is Aliette de Bodard peeking in from Paris. Charlie's been kind enough to let me borrow a spot on his blog while he recovers from jet lag (we both went to Worldcon in Spokane, but I have a big advantage over him: I wasn't in the US long enough and actually never really adapted to the 9-hour time difference, so when I came back I was basically functioning normally. On the minus side, I was a pumpkin in Spokane!). Anyway... *clears throat* Today, I wanted to talk about magic systems and how I built the one in my novel.

Magic systems, for me, are a bit like the air you breathe: I've found out (much to my dismay) that I can't start writing a story without having an idea of where the magic is coming from and who uses it. Magic conditions so much of the fabric of a fantasy universe for me that not working it out in advance feels a little like setting out across a blizzard without skis, warm clothes or a distress flare.

When it comes to magic systems, there is (of course) an entire spectrum between magic as the numinous, the fundamentally irrational and illogical (JRR Tolkien's Lord of the Rings, the magic of the sea in Patricia McKillip's The Changeling Sea), and magic as a quasi-rational system (Brandon Sanderson's Cosmere). The former, again, has a range between magic permeating the entire world (for instance, Elizabeth Bear's The Eternal Sky, where the celestial bodies hanging in the sky depend on which country one is in), and magic as an incursion, a break in the ordered surface of the world through which the numinous and outright scary can intrude (the eerie, quiet otherworld of Kari Sperring's The Grass King's Concubine is effectively contrasted with a gritty and very real Industrial Revolution). The latter, in turn, is what Brent Weeks characterises as an attempt to make magic closer to science, to prevent Deus Ex Machina endings aka getting characters out of any scrape: it sets very clear limits on what magic can and cannot do.

This, of course, raises the issue of differentiating magic and science, notably when you happen to have both in the story. Very often, magic is the province of a select few: not always a hereditary system (though Adrian Tchaikovsky's Guns of the Dawn, for instance, has two magical ruling dynasties), but certainly one of chosen people, those born with magical abilities (Robert Jordan's Wheel of Time differentiates those who can be taught and those with the spark, who will express that talent without being taught, but it remains that you're either capable of magic or not). Science, in turn, is meant to be more "democratic", in that anyone can, for instance, operate a car, or be trained to be an engineer.

But is it? Affinity for science is also the province of a select few (you tend to be either good at, say, maths, or not); and, while you might not have cars in most fantasy books, they are replete with magical or quasi-magical artefacts which can be used regardless of whether you're a magic user or not.

A last major question is whether to have different magic systems: again, they can run the gamut between a unifying principle to various and completely disparate sources of magic. Two common medium points are different flavours of magic as the expression of the same underlying source (the various flavours of spren-magic in Brandon Sanderson's The Way of Kings), and different flavours of magic tied to different races such as humans and faeries (Kate Elliott's Cold Magic and sequels in the Spiritwalker trilogy). If there are different flavours of magic, they can be complementary (Fitz in Robin Hobb's Farseer trilogy can use both the Wit and the Skill), or completely antithetical (wizardry vs artifice in Juliet McKenna's Tales of Einarinn: a wizard cannot become an artificer and vice versa).

I was very aware of those things when I developed the magic systems for my novel, The House of Shattered Wings. In particular, I wanted to tackle issues of power and abuses of power without leaning in too much on narratives of exceptionalism and chosen ones--which is a tricky maze to navigate!

I ended up settling on a two-tier magic system. In the novel, Fallen (angels) are amnesiac magic users, with an affinity for magic which means that they're natural magicians (much like humans are natural at standing up on two legs). The twist is that Fallen are also walking magical sources: they can pass on their magic to anyone with a breath or a touch; or it can also be harvested from their body parts. The opening scenes of the novel have people hacking off fingers from a newly Fallen to sell them on the black market, and someone else stealing off Fallen bones to distil them into angel essence, a potent drug that grants enormous magical power (and rots the lungs as a side-effect, because what good is magic if there's no price?). So you can be a Fallen and be a "natural" at magic, or you can be anyone and have access to borrowed power (and some human magicians are pretty freaking powerful, because they're more desperate and driven than Fallen).

I also made this magic an integral part of the world rather than an incursion: the novel is set in an alternate, turn of the century Paris devastated by a magical war. In this dystopian universe, magic becomes a precious resource and a contributing factor to the safety of refuges. The main structure of the society is Houses, magical factions (generally headed by Fallen but not always) which offer safety and power in the ruins. I made the choice to leave it partly numinous, but with clear limits such as the absence of instantaneous healing spells, in order to evoke a sense of wonder without coming off as though I were cheating. The clearest limit I actually gave on magic was the equilibrium of terror (a la Cold War): factions limit themselves on the use of magic because overuse of it already triggered a first, devastating war that no one really wants a repeat of.

The other thing I did with the book--as you might have guessed!--was running parallel magic systems. I wanted to avoid folding everything into the same overarching principle, in part because that had always felt too neat to me, and also more than slightly problematic when said magic was derived from Christianity. I set up my second magic system as a deliberate counterpart to the first. Instead of Falling down to earth and being granted magical powers, essentially by amnesia, my second system involves tiên (Vietnamese Immortals, based on Daoism).

It is not by birth, but by choice: anyone with the willpower to do the required initiation can gain access to it, and ascension to Immortals is essentially based on knowledge. Rather than be the province of ageless, deathless beings from Heaven, it raises ordinary humans to Heaven; rather than be given in scraps to other people, it can only ever be achieved for one's own self. I use this one as a contrast and counterpoint, but also as a source of similarity: as one character points out in the book, once ordinary people have attained immortality, they're as prone to arrogance and lack of compassion as powerful Fallen. I guess one of my not-so-secret themes for the book was how power changes and sometimes corrupts (a venerable tradition in fantasy, aka the Lord of the Rings approach).

(I also mention a lot of other magic systems that essentially got erased by Fallen domination, but don't go into detail on them).

So that's my magic system, and how I got there. What are your favourite magic systems, and what do they entail?


Introducing guest blogger: Aliette de Bodard by Charlie Stross

(Charlie here. The season of guest bloggers continues with Aliette de Bodard, an incredibly talented writer, who I unaccountably forgot to introduce at the same time as Fran Wilde.)

Aliette de Bodard lives and works in Paris, where she has a day job as a System Engineer. She studied Computer Science and Applied Mathematics, but moonlights as a writer of speculative fiction. She is the author of the critically acclaimed Obsidian and Blood trilogy of Aztec noir fantasies, as well as numerous short stories, which garnered her two Nebula Awards, a Locus Award and a British Science Fiction Association Award. Recent/forthcoming works include The House of Shattered Wings (August), a novel set in a turn-of-the-century Paris devastated by a magical war, and The Citadel of Weeping Pearls (October), a novella set in the same universe as her Vietnamese space opera On a Red Station Drifting. She lives in Paris with her family, in a flat with more computers than warm bodies, and a set of Lovecraftian tentacled plants intent on taking over the place.


A look into Docker concepts for application deployment by Aptivate

Docker is a tool to help in the deployment of applications across host systems. Virtualization, union file systems, image registries, orchestration services


Ten-day Taxi Trip to International Space Station Underway by The Planetary Society

A ten-day International Space Station taxi flight is underway following the Wednesday liftoff of a three-person crew from Kazakhstan.


New Horizons extended mission target selected by The Planetary Society

The New Horizons mission has formally selected its next target after Pluto: a tiny, dim, frozen world currently named 2014 MU69. The spacecraft will perform a series of four rocket firings in October and November to angle its trajectory to pass close by 2014 MU69 in early January 2019. In so doing, New Horizons will become the first flyby craft to pass by a target that was not discovered before the spacecraft launched.


Populating the OSIRIS-REx Science Deck by The Planetary Society

The assembly of the OSIRIS-REx spacecraft continues, with many elements integrated onto the spacecraft ahead of schedule. Last month both OTES and OVIRS were delivered to Lockheed Martin and installed on the science deck.


September 01, 2015

The Underwater Menace DVD. Now available to pre-order? Again? by Feeling Listless

TV George Bush famously said, "There's an old saying in Tennessee — I know it's in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can't get fooled again."

Well, here we are again, another page indicating a pre-order of Doctor Who's The Underwater Menace on Amazon.



This has one more thing in its favour than last time. A tweet from the club journal:


We don't know how they're getting around episodes one and four not having moving images available. Probably recons, though expect the omnirumour crowd to be out in force.

The BBFC hasn't classified anything new yet.

But yes, on this occasion I want to believe.  In the recons version at least.

Incidentally, I've decided to try something and put my "associates" thingy in the above Amazon URL to see what happens.  After about ten years of being registered, I'm just £15 away from my first gift voucher.


My Favourite Film of 1983. by Feeling Listless



Film The first copy of The Big Chill I owned was a first wave VHS rental release bought at a car boot sale. The inlay was appalling.  A badly trimmed-out image of the ensemble slapped onto a cyan background, it featured a synopsis which looked like it had been dictated down an especially bad telephone line to someone who wasn't a film professional and thought an actor called "Jess Coldblulm" existed.  But the tape itself was perfectly fine and ended up being watched about once a month for a few years, sometimes in a double bill with 1992's Grand Canyon (also an ex-rental) for no reason other than that they were both directed by Lawrence Kasdan and starred Kevin Kline.

You can probably imagine what happened when the younger version of me discovered that he could buy films for a pound each at car boot sales.  Every Sunday morning I was dragged to either the multi-story car park in St Helens, or the cricket club on Aigburth Road or a massive wasteland in Bootle, handed some pocket money and sent off to forage, returning on most occasions with ten or fifteen tapes.  In the years before Netflix and Lovefilm, this was my Netflix and Lovefilm, across genres and periods, selections made based on how quickly I could scoop the films up before someone else grabbed them and whether I recognised an actor on the cover or (later) the name of a director.

Disney films would be at least £5 for reasons I've since realised were to do with some tapes only being available for short periods of time in shops (towards the end of watching my way through the lot, they recently re-released everything on dvd en masse and now they're available in Tescos).  Same with Star Wars before they were released on dvd.  It was always obvious which seller was simply having a clear out and who knew exactly the worth of what they had.  Also there were piteous numbers who, when you asked how much their tapes were, would say the gut-wrenching words, "Oh um, they're all different prices ..."  But I don't have time for this!

Eventually I migrated from car boots to "legitimate" media sales chains and charity shops (and I replaced my VHS of The Big Chill with a DVD bought at the old Virgin Megastores XS (which has since been replaced with Zavvi, a Music Zone, then nothing) (the WH Smith has gone too)).  You are more likely to find the interesting oddities in these places but the selection is increasingly homogenized.  In Chester the other week I saw a copy of All About Steve in at least five outlets.  No I didn't buy it.  Instead I found the Gere-starring remake of Breathless, Gigantic (with Zooey) and the German comedy Kokowääh.  No, me either, but as with The Big Chill all those years ago, I was intrigued.

Now, after about fifteen years, I recently returned to boots and tables and was pleased to discover that nothing much has changed.  It's still entirely possible to spend the same amount of money and walk away with as many films, especially now that dvd is at about the point VHS was then, with sellers replacing or simply getting rid of their collections.  Plus there's somewhat less pressure, it seems, with some boxes and bags on floors left untouched for ages, so there's plenty of time to look, which is useful because of the secondary problem of trying to remember if a given title is available to stream at home.

Except, with so many other viewing options now, it feels more redundant, with the exception of material from boutique publishers and television series.  The thrill of the chase has all but dissipated, knowing full well that I'm unlikely to stumble on someone selling their BFI back catalogue or Criterions, and I'm not sure I have the patience now to persist in returning on the off chance.  Plus children like me then are notable by their absence now, which makes me wonder if the contemporary version of this trainee cineaste would have received their first introduction to You Can't Always Get What You Want via a cover version played on a church organ at a funeral.


August 31, 2015

A lament to the Enterprise of yesteryear by Simon Wardley

An ode to a small lump of enterprise I found in my portfolio one midsummer morning ...

We're being hit by disruptive innovation! 
Our industry is being commoditised! 
Our business is complex!
We're behind the curve!

We've created a strategy!
Hired the best!
They said they were experts!
Marketing is key!
And the future is private! 

Or Enterprise!
Or Hybrid!
The problem is our culture!
But we have a solution!
And this time it'll be different!
Or I will rend thee in the gobberwarts with my blurglecruncheon, see if I don't!

... a lament to the Enterprise of yesteryear.

Anyone looking for a short cut out of this, I'm afraid I can't help you. However, I would suggest mapping your landscape, ideally by going on a get fit regime and cleaning up the enterprise, then afterwards applying some thought. Adapting to the changing technological-economic environment is not a choice.


China. by Feeling Listless

Music A couple of days ago, the Huff Post's supercut group posted a video of Donald Trump saying China a lot:



Which is pretty mesmerising and also dislodged an earworm as I remembered a snatch of lyric from an ancient song in which a male singer said something which sounded like "China, China, China, Chinaaaa...." After stumbling around Spotify for a while with little success I decided to ask Metafilter. Here are embeds of all the songs Metafilter thought it might be:

Sadly, it's none of those. I've even asked Siri which sent me to an album of Chinese fairy tales. So either I've imagined it, or the song is still out there... I have a weird feeling it's Bollywood ... or from an advert.


August 30, 2015

Not-so-Invisible Ninjas by Charlie Stross

Or: Recent and Upcoming Debuts in Fantasy and Science Fiction... that just happen to be written by women.

Charlie invited me to come by and join in the posts helping those who may not already be in the know to find the wealth of writers who also happen to be female that they can't otherwise find when they are writing those excellent "where are all the women writers of fantasy and science fiction" posts.

I began to make a list of 'next-generation writers' who also happen to be women. (Since we don't write with our gender identities or genitalia, I figured it would be fine to not modify the word "writer," but for the search engines, I'll add it in at the end, so you know, they can find us. When they look.)

The problem seemed to be that there were so many of us who were otherwise hard to find! The entire list would crash the Internet out of pure hard-to-findness! And so Charlie set me a boundary, limiting me to 20, leaving off many excellent writers. I've thus kept this list focused on 2014 and 2015 English-language debut books in Science Fiction, Fantasy, and YA SFF. Many of these authors have new books coming out in 2015 and 2016 as well. I'll let the comments about those I've not put on this very short list stand as a reminder to you that we are NOT, in fact, hard to find.

  • Andrea Phillips - Revision (Fireside Fiction 2015) Science fiction
  • Zen Cho - Spirits Abroad (Fixi Novo, 2014) Linked short stories/Fantasy
  • Silvia Moreno Garcia - Signal to Noise (Solaris 2015), Fantasy/Slipstream
  • Ilana C. Myer - Last Song Before Night (Tor, 2015) Fantasy/Epic
  • Stephanie Feldman - Angel of Losses (Ecco, 2014) Historical Fantasy/Slipstream
  • Genevieve Cogman - The Invisible Library (Tor, UK) Fantasy/Alternate Worlds
  • Beth Cato - The Clockwork Dagger (Harper Voyager, 2014) Steampunk
  • Alyc Helms - The Dragon of Heaven (Angry Robot, 2015) Fantasy
  • Karina Sumner-Smith - Radiant (Talos, 2014) Fantasy
  • Stacey Lee - Under a Painted Sky (Putnam, 2015) Alt-Historical Western, fantasy
  • Sabaa Tahir - An Ember in the Ashes (Razorbill, 2015) YA Fantasy
  • Jacey Bedford - Empire of Dust - (Daw 2014) Fantasy
  • Susan Murray - The Waterborne Blade (Angry Robot 2015) Fantasy
  • Carrie Patel - The Buried Life (Angry Robot, 2015) Fantasy
  • Heather Rose Jones - Daughter of Mystery (Bella, 2014) Romance/Historical Fantasy/Queer
  • Nicola Yoon - Everything, Everything (Delacorte, 2015) YA Science Fiction
  • A.C. Wise - The Ultra Fabulous Glitter Squadron Saves the World Again (Lethe 2015) Linked short stories/Sci-fi/Queer
  • Monica Byrne - The Girl in the Road (Crown, 2014) Science Fiction
  • Camille Griep - Letters To Zell (47 North, 2015) Fantasy
  • me - Updraft (Tor, 2015) Fantasy

As I stipulated above, this list is defined purely by time, debut-status, and the number 20.

I'd love to add the writers who debuted in the years before us - including but not in any way limited to: N.K. Jemisin, Ann Leckie, Marjorie Liu, Alaya Dawn Johnson, Jodi Meadows, Genevieve Valentine, Justina Ireland, Jaime Lee Moyer, Stina Leicht, Jacqueline Koyanagi, V.E. Schwab, Mur Lafferty, Nene Ormes, Sarah McCarry, Leah Bobet, Natania Barron, Aliette de Bodard, Emma Newman, Alyx Dellamonica, Jaye Wells, Emily St. John Mandel, Kameron Hurley, Charlie Jane Anders...

AND the writers who came before that, including Nnedi Okorafor, Elizabeth Bear, Nisi Shawl, Kate Elliott, Candas Jane Dorsey, Jo Walton, Martha Wells, Laura Anne Gilman, Amanda Downum, Gwenda Bond, Suzanne Collins, Nalo Hopkinson, Mary Robinette Kowal, Sarah Monette, Naomi Novik, Caitlín R. Kiernan, Rae Carson, Linda Nagata, Catherynne Valente, Kelly Link, J.K. Rowling,...

And those who came before that: Emma Bull, Judith Tarr, Elizabeth Lynn, Jo Clayton, Robin Hobb, Suzy McKee Charnas, Pamela Dean, Ellen Kushner, Brenda Cooper, Tanya Huff, Janet Morris, Robin McKinley, Michelle Sagara, Tricia Sullivan, Delia Sherman, Sherwood Smith, Jessica Amanda Salmonson, Karen J. Fowler, Cecelia Holland, Nicola Griffith, CS Friedman ...

And the Grands and Great Grands and so on, like Pat Cadigan, Joan D. Vinge, Margaret Atwood, Kate Wilhelm, Jane Yolen, Connie Willis, Andre Norton, Nancy Kress, Ursula K. Le Guin, Octavia Butler, Lois McMaster Bujold, Doris Piserchia, C. L. Moore, Carol Emshwiller, Leigh Brackett, Joanna Russ, James Tiptree Jr., Anne McCaffrey, Diana Wynne Jones, Joan Aiken, C. J. Cherryh ... all the way to Mary Shelley and beyond. AND everyone here: http://www.womeninsciencefiction.com/?page_id=54, here: https://www.goodreads.com/list/show/6934.Science_Fiction_Books_by_Female_Authors, and here: https://www.goodreads.com/list/show/38909.Speculative_Fiction_Classics_pre_1980_by_Female_Authors.

AND the coming wave of 2016: here are just a few - Ada Palmer, Laura Elena Donnelly, Mishell Baker, Malka Older... And the editors. And the critics. And the publishers.

And and and... (honestly, I asked five friends to list their favorites and after fifteen minutes had to beg them to stop because my buffers overflowed.)

Oh my goodness, you would think it hard to find women writing fantasy and science fiction given those blog posts and articles.

BUT IT'S NOT.


August 29, 2015

Permission. by Feeling Listless

History The New York Times has an obituary for Amelia Boynton Robinson, who was a pivotal member of Martin Luther King Jr's group.

"Mrs. Boynton Robinson was one of the organizers of the march, the first of three attempts by demonstrators in March 1965 to walk the 54 miles from Selma, Ala., to the capital, Montgomery, to demand the right to register to vote.

"As shown in “Selma,” the Oscar-nominated 2014 film directed by Ava DuVernay, Mrs. Boynton Robinson (played by Lorraine Toussaint) had helped persuade the Rev. Dr. Martin Luther King Jr., who would lead the second and third marches, to concentrate his efforts in that city."
There's one key moment in Selma which resonated with me while watching the film last night, when, during the second attempt to cross the Edmund Pettus Bridge in Selma, MLK decides not to go. After a prayer, he turns the group, who've travelled from across the country to support him, around and walks them back again. This Guardian archive collection describes it thus:
"In the aftermath of Bloody Sunday, King himself led a symbolic march across the bridge once again. While demonstrators were more determined than ever to proceed, federal protection was needed if they were to make it to Montgomery safely. Stopped by police, the marchers kneeled and prayed, then turned around and retreated back into Selma."
The King Encyclopedia has further details.  A key character and story point then becomes the question of why this happened, why Luther King made this choice, something which I haven't been able to find an answer for.  It's not really explained in the film, other than that there were safety concerns and that King received a message from God.

But my understanding, my takeaway, is that King decided not to go, either consciously or otherwise, because he was effectively being given permission by the white folk.  As portrayed in the film, the police who at the first attempt in Bloody Sunday had beaten and tortured the marchers were stepping aside to let them through.

The only acceptable scenario King would have had for marching would have been if the road had been empty ahead anyway.  But then of course, if that had been the case, there wouldn't have been any reason to march in the first place.

The film doesn't make this explicit.

But as continues to be the case, in terms of both race and gender, the fight for equality and human rights remains a process of convincing those in power to give a permission there shouldn't be any question of them being in a position to decide to give in the first place.  They shouldn't even be in the way.


ImageMagick and FFmpeg: manipulate images and videos like a ninja by Zarino

ImageMagick (accessible via the command line program convert) is a software suite for converting and editing images. And FFmpeg (aka ffmpeg) is pretty much the same but for videos.

And, oh my God, they’re handy.

I probably use one or other of the tools every week – either at work with mySociety, or personally, converting videos for the media server on my Synology NAS.


For a while, I had a text note on my laptop with a bunch of my most commonly used convert and ffmpeg command line snippets. But open is better (not to mention easier to find), so here it is. I'll update the list over time, as I discover new tricks. If you have any suggestions, tweet them to me!

Resize an image to fit within the specified bounds

convert before.png -resize 500x500 after.png
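
If you need the output to be exactly 500x500 rather than merely fit inside that box, ImageMagick can fill and then crop. This variant isn't from the original list, but it uses standard convert options:

convert before.png -resize 500x500^ -gravity center -extent 500x500 after.png

The ^ flag resizes so the image completely fills the 500x500 box while keeping its aspect ratio, and -gravity center -extent 500x500 then crops away the overflow from the middle.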

Concatenate a series of images (before1.png, before2.png…) into an animated gif

convert -delay 100 -loop 0 before*.png after.gif

-delay is specified in 1/100ths of a second, and a -loop value of 0 means loop forever.
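
A related trick, not in the original list but standard ffmpeg usage: you can also make an animated gif straight from a video.

ffmpeg -i before.mp4 -vf fps=10,scale=480:-1 after.gif

fps=10 samples ten frames per second, and scale=480:-1 resizes to 480 pixels wide while keeping the aspect ratio. Gifs get big quickly, so keep them short and small.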

Convert one media format into another

ffmpeg -i before.avi after.mp4

The input and output formats are auto-detected from the file extensions.

This command even works with weird formats like WMV and loads of others, even on a Mac:

ffmpeg -i before.wmv -c:v libx264 -c:a libfaac -crf 23 -q:a 100 after.mp4

Because mp4 is a so-called wrapper format, you can choose the specific video and audio codecs via -c:v and -c:a respectively. In this case we’ve asked for H.264 video and AAC audio. -q:a specifies the AAC audio quality, and -crf is the H.264 video quality.
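
One more conversion trick that isn't in the original list: if you only want a section of a video and the codecs are staying the same, you can cut without re-encoding at all.

ffmpeg -ss 00:01:00 -i before.mp4 -t 00:00:30 -c copy after.mp4

-ss seeks to one minute in, -t keeps thirty seconds, and -c copy copies the existing video and audio streams instead of re-encoding them, so it runs almost instantly. Because nothing is re-encoded, the cut points snap to keyframes and may be off by a second or so.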

Extract the audio from a video

ffmpeg -i before.avi -vn -ab 256k after.mp3

-vn disables video output, and -ab specifies the audio bit rate.
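
If the source audio is already in a format you're happy with (AAC inside an mp4, say), a variant worth knowing (not from the original post) is to copy it out losslessly instead of re-encoding to mp3:

ffmpeg -i before.mp4 -vn -c:a copy after.m4a

-c:a copy moves the audio stream into the new container bit-for-bit, which is faster and lossless; it only works when the target container supports the source codec.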

Convert all the FLAC files in the current directory into MP3s

for i in *.flac; do ffmpeg -i "$i" -q:a 0 "${i%.flac}.mp3"; done

-q:a specifies MP3 audio quality, where 0 is a variable bit rate of 220–260 kbit/s. ${i%.flac} is a Bash operator that returns the variable i without the .flac at the end.
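
ImageMagick has a batch tool of its own, mogrify, so you don't always need a shell loop. An example along the same lines (not from the original list):

mogrify -path thumbs -resize 50% *.png

This writes half-size copies of every PNG in the current directory into the thumbs folder (which must already exist). Without -path, mogrify overwrites the originals in place, so be careful.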


A Cepheid Anomaly by Astrobites

Title: The Strange Evolution of Large Magellanic Cloud Cepheid OGLE-LMC-CEP1812

Authors: Hilding R. Neilson, Robert G. Izzard, Norbert Langer, and Richard Ignace

First Author’s Institution: Department of Astronomy & Astrophysics, University of Toronto

Status: Accepted to A&A

Figure 1: The Cepheid RS Puppis, one of the brightest and longest-period (41.4 days) Cepheids in the Milky-Way.  The striking appearance of this Cepheid is a result of the light echoes around it. Image taken with the Hubble Space Telescope.


It’s often tempting to think of stars as unchanging—especially on human timescales—but the more we study the heavens, the more it becomes clear that that isn’t true. Novae, supernovae, and gamma-ray bursts are all examples of sudden increases in brightness that stars can experience. There are also many kinds of variable stars—stars that regularly or irregularly change in brightness from a variety of mechanisms. Classical Cepheid variables are supergiant stars that periodically increase and decrease in luminosity due to their radial pulsations. They are stars that breathe, expanding and contracting like your lungs do when you inhale and exhale. Their regular periods, which are strongly related to their luminosity by the Leavitt law, make them important for measuring distance. Despite their importance in cosmology (as standard candles) and stellar astrophysics (by giving us insight into stellar evolution), there is still a lot that we don’t understand about classical Cepheid variables. One of the biggest mysteries that remains in characterizing them is the Cepheid mass discrepancy.

The Cepheid mass discrepancy refers to the fact that, at the same luminosity and temperature, stellar pulsation models generally predict that the Cepheids will have lower masses than stellar evolution models suggest they would. Several possible resolutions to the Cepheid mass discrepancy have been proposed, including pulsation-driven stellar mass loss, rotation, changes in radiative opacities, convective core overshooting in the Cepheid’s progenitor, or a combination of all of these. Measuring the Cepheid’s mass independently would help us constrain this problem, but as you might imagine, it’s not easy to weigh a star. Instead of a scale, our gold standard for determining stellar masses is an eclipsing binary.

An eclipsing binary system is just a system in which one of the orbiting stars passes in front of the other in our line of sight, blocking some of the other star’s light. This leads to variations in the amount of light that we see from the system. Because the orbits of the stars must be aligned nearly edge-on to us for this to happen, eclipsing binaries are quite rare discoveries. However, when we do have such a system, we know exactly what angle of inclination we are observing it at. This makes it possible for us to accurately apply Kepler’s laws to get a measurement for the mass. Eclipsing binaries are highly prized for this reason (they’ve also gained some attention for being a highly-accurate way to measure extragalactic distances, but that’s another story altogether).
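
(For reference, and not spelled out in the bite itself: the relation being used is Kepler's third law. With the orbital period P and the semi-major axis a of the relative orbit measured, the total mass follows from

M1 + M2 = 4π² a³ / (G P²)

and the ratio of the two stars' orbital velocities then splits that total into the individual masses.)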

Cepheids in eclipsing binary systems are even rarer—there are currently a total of four that have been discovered in the LMC. One has been discussed on astrobites before (it’s worth looking at the previous bite just to check out the crazy light curve). Since there are so few, and since their masses are so integral to understanding them and determining their basic properties, it’s even more important to study each system as carefully as we can to help us solve these stellar mysteries. The authors of today’s paper take a close look at one of the few eclipsing binary systems we have that contains a Cepheid. 


Figure 2: Figure 1 from the paper, depicting the stellar evolution model for the 3.8 solar mass Cepheid and its 2.7 solar mass red giant companion. The blue and orange shapes show the regions of the Hertzsprung-Russell diagram for each star that is consistent with its measured radius.

Unfortunately, rather than helping us, the subject of today’s astrobite, CEP1812, seems to cause more problems for us. Stellar evolution models indicate that the Cepheid appears to be about 100 Myr younger than its companion star, a red giant (and we expect them to be the same age). Figure 2 shows the evolutionary tracks of the red giant and the Cepheid. Previous papers have suggested that the Cepheid could have captured its companion, but today’s authors believe that this Cepheid could be even stranger—it may have formed from the merger of two smaller main sequence stars. This would mean that the current binary system was once a system of three stars, with the current red giant being the largest of the three. The merger would explain the apparently-younger age of the Cepheid because the resulting star would then evolve like a star that started its life with a mass the sum of the two merged stars, but it would look younger. The red giant, which would have been the outermost star, could have played a role in inducing oscillations in the orbits of the two smaller stars that caused them to merge.

The authors test this proposal by evolving a stellar model of two smaller stars for about 300 Myr before allowing them to merge. The mass gets accreted onto the larger of the two stars in about 10 Myr, and the resulting star then evolves normally, but appears younger because the merger mixes additional hydrogen (lower mass stars are less efficient at hydrogen burning) into the core of the star, which matches the observations.

The authors argue that if CEP1812 is formed from the merger of two main sequence stars, it would be an anomalous Cepheid. Anomalous Cepheids have short periods (up to ~2.5 days), are formed by the merger of two low-mass stars, usually only about ~1 solar mass, have low metallicity, and are found in older stellar populations. There they stand out, since these Cepheids end up being much larger and younger than the surrounding ones. However, CEP1812 is about 3.8 solar masses and also at a higher metallicity than most anomalous Cepheids, making it a highly unusual candidate. CEP1812’s period-luminosity and light curve shape are also consistent with both classical Cepheids (which are the kind we use in measuring distances) and with anomalous Cepheids.

If CEP1812 is an anomalous Cepheid, it straddles the border between what we think of as “classical” Cepheids and what we think of as “anomalous” Cepheids. This would make it a risky Cepheid to use for cosmology, but interesting for bridging the gap between two different classes of Cepheids. The possibility of it being an anomalous Cepheid is just one resolution to its unique properties. However, if it does turn out that CEP1812 is not just another classical Cepheid, it could be the first of a fascinating subset of rare stars that we haven’t studied yet. Ultimately it’s still too soon to tell, but the authors have opened up an interesting new possibility for exploration. 

 


August 28, 2015

How I learned to stop worrying and love the concept of punitive slating.... by Charlie Stross

Hi ho, Elizabeth Bear here, coming to you with a special report from deep in the wilds of eastern central North America, just underneath the left end of that wobbly looking blue bit that looks kind of like a kersplotchy asterisk. And I'm here at Charlie's Diary today to talk about slate voting for the Hugos, and what some potential developments of its tactical use mean to the individual artist.

By slate voting, in this case, I mean the practice of some person, generally an internet pundit or personality of some sort with an interest in the outcome of the Hugos, presenting an organized "slate" of nominees consisting of exactly as many nominees as there are nomination slots on the ballot. This, if the organizer can manage to procure a fairly small minority of voters, can have the effect of driving all disorganized (that is to say, non-slated) works off the ballot, because those non-slated candidates are being simply chosen by people who liked their work best out of the available options and weirdly enough, different people tend to like different things.

Slate or bloc voting is not technically forbidden under the rules. But I think it's damned poor sportsfanship, and the Hugo outcome indicates that an overwhelming majority of my fellow fans, of nearly all political stripes, agree.

This is what happened with the Hugos this year. The Hugos have a built-in nuclear option fail safe, the "No Award," option, by which the voters (self-selected members of the World Science Fiction Society, who pay a membership fee that includes voting privileges) can deal with either works they deem unworthy of the nomination, perceived cheating, or both. It was deployed heavily this year to counteract the slates. As a result of the slates a number of works were never given a chance at consideration—including a very good story by the late Eugie Foster that may have been her last chance at a Hugo nomination—and as a result of the "No Award" option, a number of Hugos simply were not handed out.

While there are some rules changes in the works to make it all more difficult to pull off in the future, they will take an additional year to ratify because that's the way the World Science Fiction Society constitution works, so the 2016 Hugos have the same vulnerabilities as the 2015 ones did.

I'm not particularly concerned at this juncture by the Rabid Puppies' threat to "No Award" every category in the Hugos, because in my opinion they just can't marshal the votes. (It takes a lot more individual ballots to force a "no award" than it does to get something on the ballot in the first place.)

And I'm not particularly concerned by a repeat performance of an all-slate ballot, because I suspect that it'll be hard for the people who failed to push a slate winner through in 2015 to muster a lot of interest from the people they recruited this year to drop an additional $40 to vote next year. (I could be wrong. I often underestimate the human capacity for spite. But I wouldn't do it, in their shoes, over something I have no particular emotional investment in.)

Also, with a little luck, most of the record ~6000 Hugo voters (or even better, most of the record ~11,000 Worldcon members!) this year will turn out and nominate and vote, which would be an absolute game-changer for the awards, their legitimacy, and their relevance. It could be a renaissance for the Hugos, in point of fact, and the deliciousness of that emerging out of attempts to co-opt or destroy the awards is indescribable.

There's my preference right there: If you love science fiction and fantasy and you have the money for a supporting membership, or if you already signed on in 2015, please please please if you read something you like, nominate it. You don't have to nominate in all categories. You don't have to read everything published. The nomination process is specifically designed to create a consensus out of the partial knowledge of many people, and the more people who participate, even with partial knowledge, the better it works.

And once you've nominated something, tell your friends you liked it. I have absolutely no problem with Hugo rec lists, Hugo "Here's my ballot" posts, or even Hugo "Here's what I have eligible this year" posts. Those are not slates, and they don't concern me in the slightest, because they do not act to spoil and thwart the process in the way that slates do.

There are two things I am concerned about. One is other concerned groups in fandom mustering and voting their own slates, in direct competition with the Sad and Rabid Puppy slates. (Assuming there is going to be a Rabid Puppy slate next year, rather than just an attempt to block vote No Award on every category, as threatened. Based on the existing evidence, the Rabid Puppies and internal consistency are not exactly chocolate and peanut butter.)

I think this is a terrible idea, for exactly the same reasons I think the Sad Puppy and Rabid Puppy slates are a terrible idea, and I cannot support it.

The other is the concept of punitive slating. I have talked with a lot of the Sad Puppy voters, and I really believe that many of them were acting in good faith and voting for work they really liked. I don't believe they'd go in for this.

The Rabid Puppies, though, are self-declared reavers out to wreck the Hugos for everybody. I think their organizer Vox Day has made himself a laughingstock, personally—he's been pitching ill-thought-out tantrums in SFF since before 2004, and all he ever brings is noise. But he and his partisans seem to be too ego-invested to admit they're making fools of themselves, so they'll never quit.

So it's totally possible that the Rabid Puppy organizers and voters, in the spirit of burning it all down, would nominate a slate consisting of the sort of vocal anti-slate partisans who could conceivably swing legitimate Hugo nominations on fan support, having a track record of the same.

I'm talking about people such as our good host Charlie Stross, John Scalzi, George R.R. Martin, Patrick Nielsen Hayden, and myself. Or just, you know, people they hate—the categories overlap. The goal here would be to then attempt to either force us to withdraw or refuse nominations to prove our lack of hypocrisy, or for fandom to again No Award the whole process. This is the Human Shield option, which—in a slightly different application—is what led to the inclusion on the Rabid Puppy slate of uninvolved parties such as Marko Kloos, Annie Bellet, Black Gate, Jim Minz, and so on in 2015.

This possibility concerns me a bit more, but honestly, I think it's pretty easy to manage. First of all, I'm going to state up front that I will never willingly participate in a slate. If I learn that I have been included on a slate, I will ask to be removed, and I will bring as much force to bear on that issue as I legally can.

Additionally, I'm going to rely on the discretion of readers and fans of goodwill, who I think are pretty smart people. If you see my name on a slate, please assume that it's being done by ruiners to punish me, and that whoever put it there has ignored my requests to remove it. I have nothing but contempt for that kind of behavior, and I'm frankly not going to do anything to please them at all.

My colleagues, of course, are free to deal with the situation as they see fit, up to and including refusing nominations. As for me, well—while I reserve the right to turn down an award nomination at my discretion, I'm not about to be forced into it by the action of trolls and reavers. I expect my readers to be able to make up their own minds about my work, and decide for themselves if it's worthy of an award or not, and vote accordingly in a fair and sportsfanlike fashion.

I expect Charlie's fans—that would be you guys, reading this on his website—can manage to do the same.


New guest blogger: Fran Wilde by Charlie Stross

I'm still recovering from jet lag. But in a desperate attempt to hang on to your attention—and to continue the discussion on women in SF that kicked off here over the past month—I've invited another guest blogger, first-time novelist Fran Wilde. Her first novel, Updraft, debuts from Tor Books on September 1, 2015. Fran's short stories have appeared at Tor.com, Asimov's Magazine, Beneath Ceaseless Skies, Uncanny Magazine, and Nature. Fran can program digital minions, set gemstones, and tie a sailor's knotboard. She also interviews authors about food in fiction at Cooking the Books, and blogs for GeekMom and SFSignal. You can find Fran at her website, Twitter, and Facebook; and, shortly, here.


The back of my head one day. by Feeling Listless



TV As part of the BBC's Pop Art season, the latest edition of What Do Artists Do All Day? focuses on Peter Blake and in particular his work on the Liverpool Biennial's Dazzle Ferry. The whole programme can be watched here for the next few weeks. You'll remember I attended the launch of Snowdrop in April and posted lots of photographs of it here, but as you'll also have gathered from looking above this block of text, there I am taking said photographs in the programme, so the back of my head has now been on national tv.  Here's the photograph I took at almost that exact moment:



And here's the shot I chose to put on the blog in which Blake looked directly at me:



I'm wearing the same coat I featured in this picture with Pete back in 2009.  Still going strong.  Still warm and much needed on that day which despite being sunny was also pretty chilly.


"I hate Bobby Davro. There you go, I've said it. Even my mum and dad hate Bobby Davro." by Feeling Listless



TV Glancing through the names of celebrities who're soon to enter the Channel 5 rendition of the Big Brother house trying to work out who these people are and what they want, I notice that one Bobby Davro will be joining them. Those of you with long memories will remember that Mr Davro was a key part of the very first seminal series of the show as the four remaining housemates were shown episodes of his television series quite logically as a reward for their abilities as impressionists.

When I was still watching the programme this was high on the list of my favourite moments (along with pretty much anything which happened between John Tickle and Nush in BB4) and here it still is on YouTube. Scroll through to 20:35 on the above embed to hear Anna Nolan's perfectly timed rant on the work of Mr Davro, and later Darren as he realises who he is, and then the later shots when he's disabused of his own opinion.  Classic television.


Dropping Orion in the Desert: NASA Completes Key Parachute Test by The Planetary Society

NASA’s Orion spacecraft completed a key parachute test Aug. 26 at the U.S. Army Yuma Proving Ground in Yuma, Arizona.


August 27, 2015

The Starbucks on Bold Street has closed. by Feeling Listless



Commerce Quite suddenly. My last visit was last Thursday where I sat on the ground floor beneath the stairs near the Wood Street entrance and read that week's comics. Passing by this morning I saw the above. Asking at the checkout in the Home Bargains next door but one, it seems to have shut on Monday.

Now I know what a couple of you are thinking, and are presumably poised to type into a social media text box: it's just an outlet of a multi-national company which doesn't pay as much tax as it should, and there are still plenty of independent coffee shops in the area and here's a list of them, and yes, all of that is true.

But I like Starbucks coffee and I liked this space which like the best outlets was modular.  The seating area at the front and beneath the stairs.  Half way up the stairs overlooking Wood Street and at the top with the massive table and easy access to the toilets.  Plus it tended to be less manically busy than some cafes, which was probably ultimately its downfall.

Like any space you visit regularly we have history and this blog has history with it. Not as much as I thought, but there's plenty of business here in which a visit to this Starbucks would also have been involved.  In any case, to memorialise, here's an archive of links to previous posts on this blog specifically about Starbucks on Bold Street.

When I entirely failed to deal with a private view at the venue which filled the space before Starbucks moved in (about 2000).

When one of the items I'd Bookcrossed in there was found in 2003.

When another of the items I'd Bookcrossed was found too.

When I went for a coffee with the best new person I met in 2003. Which I reflected upon in 2013.

When it became a third place in 2004.

When I badgered them about buying the acoustic version of Alanis's Jagged Little Pill (which was an exclusive in the US).

When I tried the Christmas blend for the first time in 2006.

When I read the first issue of the Buffy: Season Seven comic in March 2007.

When I tucked into a Gingerbread Latte while listening to the Doctor Who audio The Auntie Matter in August 2013.

Bye then.


Change you can engage with by Goatchurch

If voting changed anything, they’d make it illegal — Emma Goldman.

So, if you want people to vote, they have to believe that it can change something.

The Labour Party is undergoing a sudden and spectacular revolution, with hundreds of thousands of people signing up in the belief that their vote will make a difference when they elect Jeremy Corbyn. No one saw this coming.

Just one month ago the former leader Tony Blair said that anyone who supported Corbyn should get a heart transplant.

Funnily enough, Blair only became party leader (and, by default, Prime Minister) because John Smith had a heart attack and died. Blair was then stupid enough to believe that he was there because of his awesomely crappy policies, policies that caused so many people to quit the Labour Party that he had to fund his 2005 election by selling seats in the House of Lords.

Voting in Scotland in a referendum was going to make a difference, and the turn-out there was massive.

But in the wider country there continues to be a problem with the General Election, where necessary change is not coming about and people are getting screwed.

Young people don’t vote because they know it doesn’t make a difference. The system is too skewed. The old people in the rural constituencies reliably root for the Tories and provide their base. The Tories return the favour by redistributing the wealth from the youth to their elders on a massive scale through rising house prices, tuition fees (after this older generation got educated for free), historically low wages, a rising retirement age, a declining pension (which doesn’t affect the current generation of pensioners), expensive public transport while car driving becomes cheaper, cuts in inheritance tax (how old are the “kids” when they actually get the money?), and huge bank bailouts to protect the savings of those with hundreds of thousands of pounds on deposit.

I met someone at a Citizen Beta event on Monday night who thought the real problem is that voting in the General Election is too inconvenient and can’t be done on-line. Others thought it was because Parliament’s procedures are too arcane and not user-friendly enough.

But maybe it’s the content.

Parliament is not relevant and it does not offer enough opportunities to change things that otherwise are not going to get changed.

Here’s how out-of-touch I am. I made a big proposal that Parliament should have an e-petitions site where, if we get a million electronic signatures, the motion we signed up to would be debated on the floor of the House and then be subjected to a binding vote by MPs.

I said that maybe an online petition site could perform this purpose of forcing Parliament to address matters of public concern that it prefers to ignore.

What good is petitioning them to do something they’re going to do anyway?

The essence of it has got to be antagonistic. It cannot be a matter for discretion. The Labour Party is going to get a new leader if he gets the votes, whether they like it or not. It's a numbers game. There is no discretion.

If you had a million signatures on a petition for a debate and binding motion to be held, then that debate — when scheduled — would begin with at least a million people interested in and engaged with the outcome of the process. They’re going to want to see that change is possible. If we’re only supposed to watch and take our medicine, why do they think we should bother?

Unfortunately, I have been so out of it that I didn’t realize that this official Parliamentary e-petitions site was already in existence and has been going for so long that there’s even been committee inquiries and reports into it!

My goodness!

There’s even been a reasonable review of its functioning by the Hansard Society from 2012.

One of the biggest petitions, which got 148,373 signatures in 2012, was CHEAPER PETROL AND DIESEL, BY ROBERT HALFON MP AND FAIRFUEL UK, which was effectively enacted. So we got higher VAT in place of fuel tax rises and therefore continued our dependence on private motor vehicles favoured by the old folk who already own their own home in the countryside which they enjoy driving to.

So what’s top of the petitions right now:

[Screenshot: the e-petitions site, with the petition to legalise cannabis at the top of the list]

There it is right there, overwhelmingly supported by young people. Where is the promised debate? The MPs would rather not address this issue properly, so where is the organization demanding the debate be held, is properly conducted, and offers a real potential for change?

The Cheap petrol petition had its fairfueluk organization who presumably chased their thing up. What’s not going on here?

People don’t understand that Parliament is like a courthouse. It has huge powers, but it is institutionally reactive. It does not a will of its own. You have to use the procedures provided to take a case to it and drive it through.

The e-petitions are one such procedure.

The more these procedures get used effectively, the more they will work. They are like paths in the jungle. Paths are made by people walking on them. You build one short bridge across an impassable ravine in the middle, and the paths will be made by people traveling to it.

The cannabis issue has a very interesting recent legislative history. Read my blogpost from 2009 recounting it.

Short story version. Cannabis was on the road to decriminalization in 2003 by being downgraded to a Class C drug by vote in Parliament. The Daily Mail, read by angry old people, ran a sustained campaign against the policy backed up by deliberately false reporting (see blogpost for details). In 2008 Gordon Brown, one of our worst ever Prime Ministers, thought he could pander to this class of voters by suddenly changing it back to a Class B drug. It would have happened without a vote in Parliament had not one MP shouted “Object” at exactly the right moment when the speaker muttered the words “Motion number twelve”.

The issue still remains live. An Oral question in 2014 beginning with “What recent assessment [has the Minister] made of the potential medicinal benefits of cannabis?” received an unprecedented 8 public annotations on the TheyWorkForYou site.

This clearly is the issue that would make an excellent path breaker. The petition site is the bridge in the jungle. You’ve got 203,899 people signed up to cross it. All you need is to make them believe that by rights, morally, they should expect and be entitled to change on the other side.

Quite simply, they deserve to witness a debate and motion in Parliament to reclassify cannabis back to a Class C drug that would be binding on a vote.

If cannabis had this status for five years from 2003 to 2008 without any harm being done (aside from that which was made up and imagined by the Daily Mail), then we can have it again. And if a failed Prime Minister can reverse it on a whim, then it can just as easily be fixed again.

This is an easy battle, an early skirmish we should encourage the generations to fight out right now on Parliamentary turf. This one only has winners and the outcome doesn't matter. It's not a zero sum game. The youth can have their pot without going to jail, and old people won't actually get their houses burnt down by imaginary paranoid schizophrenics who've smoked too many joints — no matter what the Daily Mail tells them to believe. They can just put up with it just like they've put up with other new stuff, like gay marriage.

The young have got to get out and fight on this one.

Once the youth get the expertise and the taste for victory, they’ll be able to move on to battles that really matter, like housing, education, environment and employment where we have got to stop the older generation pulling the ladder up after them and selfishly clinging onto these assets far beyond their needs and squeezing out the economic prospects for the next generation.

There once was exponential economic growth which provided the younger generation with room to exist. But since that’s not happening now, the older generation has got to be forced to start giving back. They don’t want to. They don’t know they need to. And the political tools to do it have to be made out of what is already there.


From Large to Small: Astrophysical Signs of Dark Matter Particle Interactions by Astrobites

Title: Dark Matter Halos as Particle Colliders: A Unified Solution to Small-Scale Structure Puzzles from Dwarfs to Clusters
Authors: M. Kaplinghat, S. Tulin, H.-B. Yu
First Author’s Institution: Department of Physics and Astronomy, University of California, Irvine, CA

 

 

The very large helps us to learn about the very small, as anyone who’s stubbed a toe—rudely brought face to face with the everyday quantum reality of Pauli’s exclusion principle and the electrostatic repulsion of electrons—knows.  Astrophysics, the study of the largest things in existence, is no less immune to this marvelous fact. One particularly striking example is dark matter. It’s been a few decades since we realized that it exists, but we remain woefully unenlightened as to what this mysterious substance might be made of. Theories on its nature are legion—it’s hot! it’s cold! it’s warm! it’s sticky! it’s fuzzy! it’s charged! it’s atomic! it’s MACHO! it’s WIMP-y! it’s a combo of the above!

How are we to navigate and whittle down this veritable circus of dark matter particle theories? It turns out that an assumption about the nature of the subatomic dark matter particle can lead to observable effects on astrophysical scales.  The game of tracing from microphysics to astrophysics has identified a clear set of dark matter properties: it’s cold (thus its common appellation, “cold dark matter,” or CDM, for short), collisionless, stable (i.e. it doesn’t spontaneously decay), and neutrally charged (unlike protons and electrons). CDM’s been wildly successful at explaining many astrophysical observations, except for one—it fails to reproduce the small scale structure of the universe (that is, at galaxy cluster scales and smaller). Dark matter halos at such scales, for instance, are observed to have constant-density cores, while CDM predicts peaky centers.

What aspect of the dark matter particle might we have overlooked that can explain away the small scale problems of CDM?  One possibility is that dark matter is “sticky.” Sticky dark matter particles can collide with other dark matter particles, or are “self-interacting” (thus the model’s formal name, self-interacting dark matter, or SIDM for short). Collisions between dark matter particles can redistribute angular momentum in the centers of dense dark matter halos, pushing particles with little angular momentum in the centers of peaky dark matter halos outwards—producing cores.  If you know how sticky the dark matter is (quantitatively described by the dark matter particle’s self-interaction cross section, which gives the probability that two dark matter particles will collide) you can predict the sizes of these cores.
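
(A rough rule of thumb, standard in the self-interacting dark matter literature rather than specific to this paper: the scattering rate per dark matter particle is roughly

Γ ≈ ρ (σ/m) v

where ρ is the local dark matter density, σ/m the cross section per unit mass, and v a typical relative velocity. A core develops roughly out to the radius where Γ multiplied by the age of the halo exceeds one, which is why a larger cross section implies a larger core.)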

The authors of today’s paper derived the core sizes of observed dark matter halos ranging in mass from 10^9 to 10^15 solar masses—which translates to dwarf galaxies up through clusters of galaxies—then derived the self-interaction cross sections that the size of each halo’s core implied. This isn’t particularly new work, but it’s the first time that this has been done for an ensemble of dark matter halos.  Since halos with different masses have different characteristic velocities (i.e. velocity dispersions), this lets us measure whether dark matter is more or less sticky at different velocities.  Their range of halo masses allowed them to probe a velocity range from 20 km/s (in dwarf galaxies) to 2000 km/s (in galaxy clusters).

And what did they find? The cross section appears to have a velocity dependence, but a weak one. For the halos of dwarf galaxies, a cross section of about 1.9 cm^2/g is preferred, whereas for the largest halos, those of galaxy clusters, they find that a cross section that’s an order of magnitude smaller—about 0.1 cm^2/g—is preferred. There’s some scatter in the results, but the scatter can be accounted for by differences in how concentrated the dark matter in each halo is (which depends on how it formed).

But that’s just the tip of the iceberg.  The velocity dependence can be used to back out even more details about the dark matter particle itself. To demonstrate this, the authors assume a simple dark matter model, in which dark matter-dark matter interactions occur with the help of a second particle (the “mediator”) that’s nearly massless by comparison—the “dark photon” model. Under these assumptions, they predict that the dark matter particle has a mass of about 15 GeV, and the mediator has a mass of 17 MeV.

These are exciting and illuminating results, but we are still a long ways from our goal of identifying the dark matter particle.  The authors’ analysis did not include baryonic effects such as supernovae feedback, which can also help produce cores (but may not be able to fully account for them), and better constraints on the self-interaction cross section are needed (based on merging galaxy clusters, for instance).  The astrophysical search for more details on the elusive dark matter particle continues!

 

 

 

Cover image:  The Bullet Cluster.  Overlaid on an HST image is a weak lensing mass map in blue and a map of the gas (as measured by x-ray emission by Chandra) in pink.  The clear separation between the mass and the gas was a smoking gun for the existence of dark matter. It’s also been intensely studied for signs of dark matter self-interactions.

Disclaimer:  I’ve collaborated with the first author of this paper, but chose to write on this paper because I thought it was cool, not as an advertisement!


Webcomic: Poetry in space by The Planetary Society

Take a delightful, pixelated journey with French artist Boulet as he explains his love for the "infinite void" of the "mathematical skies."


Amazon and the last man standing by Simon Wardley

I often talk about the 61 repeatable forms of gameplay in the market and I know I'm a bit behind on doing those posts. I don't normally stray off the path but I thought I'd cover a well known game called last man standing. The reason I want to talk about this is that there seems to be continued misunderstanding about Amazon and what's likely to happen. Now there are two possible reasons - either I'm wrong or lots of other people are.

Hence, I'll put my stall forward.

Amazon is likely to be supply constrained when it comes to AWS and EC2. What I mean by this is that it takes time, money and resources to build data centres. You can't magic them out of the air. With AWS already doubling in physical size (or close to) each year, this creates considerable pressure, and if AWS were to drop the price too quickly then demand would outstrip supply (i.e. it just won't be able to build data centres fast enough). Hence Amazon would have to control pricing in order to control demand.

I know that people talk about AWS being a low margin business but I'll stick with older figures and say that Amazon is probably making a gross (not net) margin of 80%+.  Let us look at revenue and for this, I'll turn to an old model from my Canonical days (see figure 1) after which we will cover a couple of key points in time that are coming up in that model.

Figure 1 - Estimated forward revenue run rate.


By my past reckoning, by 2014 AWS would have a forward run rate of around $8Bn, which means in 2015 it would make around $8Bn or more in revenue. Currently people are estimating around $5-6Bn, so I count that as pretty damn good to get into the right order of magnitude. However, this is not about how accurate or inaccurate I might have been. This is about the steps and what roughly will happen.

1) In 2015, I expected AWS to clock a revenue of $8Bn+, a gross margin of 80%+, for Amazon still to be supply constrained, and for a few examples to emerge of large companies reliant on cloud (i.e. what we now call data centre zero companies).

2) In 2016, I expected AWS to clock a revenue of $16Bn+, a gross margin near to 80%, for Amazon still to be supply constrained, a very visible movement of companies towards using AWS and the market around AWS skills to heat up. I expected by the end of the year for the wheels to start coming off the whole private cloud market (which is why I've warned about this being the crunch time).

3) In 2017, I expected AWS to clock a revenue of $30 Bn+, a gross margin near to 80% and Amazon still to have to control pricing. However, by the end of the year I expected this supply tension to reduce as the growth rate would show signs of levelling. This will provide more opportunity to reduce pricing to keep physical growth to doubling. I expect AWS skills to be reaching fever pitch and the wheels to be flying off the private cloud market.

4) In 2018, I expected AWS to clock a revenue of $50Bn+. I expected gross margin (and prices) to start coming down fairly rapidly as Amazon has significantly more price freedom (i.e. is far less price constrained than is currently the case). Data centre zero companies will become prevalent and there will still be a fever pitch around AWS skills.

5) In 2019, I expected AWS prices to be rapidly dropping, the growth rates to continue levelling, the fall-out to start biting into hardware competitors, the private cloud industry to have practically vanished and the remaining laggards to be making a desperate dash into cloud.

6) By 2020, the game is not only all over (last chance saloon was back in 2012) but we start chalking up the casualties. 

This doesn't mean there won't be niches - there will be and it's in these spaces that some open source efforts will hopefully hide out for future battles. This doesn't mean that some geographic regions won't try and hold out for spurious reasons - they will and at the same time harm their own competitive industries. This doesn't even mean I think my own figures or timing will be right, remember this model is ages old. I'm no fortune teller and at best I view it as being in the right direction. However, until someone gives me a better direction then this is the one that I've stuck with and so far, it seems to be fairly close.

Oh, and the last man standing? Well, in the last few years of the model, when the price is dropping, it is all about last man standing. Many competitors won't be in a position to cope with how low the prices will go. The economies of scale will start to really tell here. Many will fall and it won't be gentle and graceful. It'll be more brick-like, as in a brick fired from a howitzer pointing downwards from the top of a building.

P.S. Before someone tells me the big hardware vendors are going to make a difference in infrastructure ... please don't. It's over. It has been over for some time. Even if I had $50 Bn, I need to build the system, build the team, build the data centres before I launched and at any reasonable scale (even with using acquisition as a short cut) I'd be talking two years+ at lightning fast speed. I'd be walking into this market as a well funded startup against a massive behemoth who owned the ecosystem. Even those ex-hardware vendors with existing cloud efforts have too little, too late. No amount of money is going to save them here. These companies are just going through the motions of hanging on for as long as they can. There's a platform play but that's a different post.

P.P.S There will be some cloud players left - AWS will dominate followed by MSFT and then Google and a player like Alibaba. There'll be some jostling for position and geographic advantages.


August 26, 2015

The Open Source Cloud, start playing the long game. by Simon Wardley

Back in 2007, I gave a keynote at OSCON on the future importance of open source and open standards to create competitive utility computing markets (i.e. the cloud). We had a chance for an early land grab to make that happen in what is called Infrastructure as a Service but we lost that battle to AWS (and to a lesser extent MSFT and Google). There are numerous reasons why, far too many to go through in this post.

Just because the battle was lost, doesn't mean the war was. Yes, because of the punctuated equilibrium, we're likely to see a crunch in the 'private' cloud space and near dominance of the entire space by AWS with MSFT following. Yes, Amazon plays a very good ecosystem game and they are a tough competitor. However, in about 10-15 years in what will feel like the wilderness, we will get another opportunity. In much the same way that Linux clawed its way against the near total domination of Microsoft. There are numerous reasons for this, again too many to go through in this post and of course, there could be many twists and turns (e.g. the somewhat unlikely open sourcing of AWS technology).

For the time being, the open source cloud world (and yes by that I mean systems like OpenStack) needs to hunker down, to firmly entrench itself in niches (e.g. network equipment), to build up and mature and prepare for the long fight and I do mean a LONG fight. A couple of encouraging signs were @jbryce's comment at OpenStack SV 2015 on "having a reason" to build a cloud and not just because it's cool technology, along with the discussion on maturity vs adoption of technology. This was good. But afterwards some of the conversations seemed to slip into "the path to Cloud Native", "democratising IaaS", "a platform for containers" (an attempt to re-invent again but around Kubernetes), "the problem is you" (as in IT depts not adopting it), "open source is a competitive advantage" (depends upon the context) and on and on.

You need to remember that for companies who might use these services their focus should (and increasingly will) be on meeting some need with speed (i.e. quickness of delivery), agility (applying more or less resources to the problem as needed) and efficiency (being cost competitive to others). Yes, things like mobility matter from the point of buyer / supplier relationships and in some niches there are location constraints. However, no business under competition is going to last if they sacrifice speed, agility and efficiency in order to gain mobility. To survive, any open approach needs to solve these problems and deal with any issue created by Amazon's huge ecosystem advantage. There is lots of good stuff out there such as Docker and in particular Kubernetes but the strongest plays today in the open source world are around the platform with Cloud Foundry and in the operating system where Ubuntu dominates with some competition from the challenger CoreOS. 

The battle for IaaS may be lost but the war is far from over and yes, I hear that this or that paradigm shift will change the game again - oh, please don't bother.  The open source world will get another chance at the infrastructure game as long as they focus on the long term. Probably the best route of attack in the long term starts with Kubernetes but that's another post.

P.S. People ask why I think CloudStack has a shot. Quite simply, the Apache Software Foundation (ASF) can play the long term game. I'm not convinced that after the crunch that OpenStack will be in such a position. We shall see.

P.P.S. People ask why am I so against OpenStack? This might surprise you but I'm not. However, OpenStack needs to hunker down against the storm and play the long term game. I'm not convinced by its earlier examples of gameplay that it either understands this or is willing to do anything about it.


"Bernice?" by Feeling Listless

Books Yes, indeed:


With Big Finish mixing new Who characters with their own and some old favourites (can you believe this now exists?) comes a further example of Doctor Who's "expanded universe" coming into contact with the revival.  To save you a click, @twilightstreets is Who legend Gary Russell, who has form on this.  When he was an editor on the Doctor Who Adventures strip way back when, Mephistopheles Arkadian from the Gallifrey audios made a cameo in one of the Tenth Doctor strips, with a plot point which eventually played out at Big Finish.  Expect a sarcastic line from Clara about "What is it with you and archaeologists".  Your move, Iris.


A Storm Of Stories by Charlie Stross

Filmmaker and comic author Hugh Hancock here again. Charlie's in mid-flight over the Atlantic at present, so I'm here to entertain you in his stead. And I brought statistics.

How many notable feature films can you think of that came out last year? Really good, solid movies?

Take a moment. Count. Maybe make a list.

How about really good TV shows, or computer games? Again, make a quick list.

I'll explain why we're doing all this list-making in a minute.

I've been considering the state of storytelling media in 2015 for a little while now, and one thing keeps cropping up in my personal media consumption: I'm consuming more media that wasn't released in the last year than ever before.

Indeed, my default reaction to something interesting arriving has become "I'll get around to it in a year or so".

So I started digging to find out why.

How Many GOOD Stories Are Being Released?

It's become a truism to say that there are a lot more stories - in every storytelling artform - being created than has ever been the case before. But the sheer scale of the influx is still pretty astonishing. Since this time last year:
  • 9,992 new feature films have been completed, according to IMDB.
  • 5,000 new seasons of TV shows have been released. It's hard to figure out how many of them are fiction, but it's almost certainly over 1,000.
  • 5250 games were released on Steam alone last year. Across all platforms there wasn't a single month where less than 1,000 games were released, according to Metacritic.
  • 4,445 books were released on Amazon in the SF&F genre alone. Across all fictional genres, 36,099 novels were released since this time last year.
To put it in perspective, assuming 8 hours a novel, you'd need 32 years reading non-stop - no sleep, no food, no toilet breaks - to read this year's output of fiction alone. Now, my default assumption is that nearly all of those releases are crap. After all, they must be, right? If they were really good, I'd have heard of them. Fortunately, it's very easy to check that, as all the outlets above have ratings. I defined "crap" pretty harshly - anything that got less than a 70% rating:
  • 1,374 of the feature films released last year scored above 70%.
  • 208 of the 600 "Drama" TV series scored above 70%. That implies at least 333 fiction TV series scored over that number.
  • 877 of the computer games listed on Metacritic scored over 70%.
  • Amazon's advanced search only shows 100 pages of results: at page 99, all the SF books were still listed with well over a 70% score. So that's over 1,200 novels in SF&F
Even excluding the last one, which looks a bit dubious, those are some remarkable results. 877 games in the last year that are at least worth a look? Over a thousand feature films? OK, let's get harsh about this. How many of these are really notable? I reset the search results to anything getting above 8.5 / 85%, and tried again:
  • 72 feature films released in 2015 are rated above 8.5 on IMDB. That's not just blockbusters with massive fan communities, either - fan favourites like Age Of Ultron often scored less than 8.5.
  • 35 drama series were rated above 8.5. Of the ones I've watched, all seem to be appropriately rated - I might not like "Mr Robot", but it's pretty clear it's universally acclaimed.
  • 114 games were rated above 85 on Metacritic. A couple of those look dubious (Arkham Knight? Really?), but most of them clearly deserve to be there: Pillars Of Eternity, Kerbal Space Program, et al.
  • And finally, approximately 300 SF&F books are rated above 4.5 - actually closer to 90% - from this year's crop.
And this is why I asked you to make a list at the start. Those numbers are way higher than expected. Not the number of storytelling projects that are coming out - we know that there are tons of those, and we know why - but the number that are actually incredibly good. There are at least three separate fields that I've heard being referred to as being in a "golden age" right now - books, TV and games. (That's from the perspective of the consumer, not the creator.) And this is where my perception that I'm consuming more of "last year's media" comes in.

A Growing Tidal Wave

So how does a narratophile - someone who loves stories - react to this? Well, let's do some crude modelling. Surveys put an American's total leisure time per day at 4.09 hours. Let's assume that our narratophilic exemplar spends fully 50% of that leisure time doing nothing but consuming media. Let's further assume that she doesn't bother with anything created before 2015, or puts her "older media" consumption into the other half of her time. In 2015, she has 750 hours. She's very picky, so she only bothers with media that fits our "truly excellent" criterion. And even then, she only fancies playing/watching/reading a small percentage of those admittedly excellent stories - let's say 35%. And furthermore, let's say she's a hardcore sci-fi fan, and simply isn't interested in reading any books outside the SF&F genre. Given all of that, in 2015 she has:
  • 49hrs of feature films, assuming 2hr average runtime.
  • 437hr of TV, assuming 10 hr for an average series.
  • 798hr of gaming, assuming 20hr of play time, on average, per excellent game. (It actually may be far longer, but we're being conservative here.)
  • 840hr of books, assuming 8 hours a book.
That's a total of 2124hr of entertainment to get through in one year. So what does she do? Well, she reads/watches/plays some of it, but she puts much of it on a list of things she'd like to read/watch/play in future. And then in 2016, assuming the same amount of output, she splits her choices between the stuff she has left over from 2015, and the new hotness in 2016. And in 2017, the same. And so on. Here's how that looks:
  • Year 1: 49 hr of feature films. 437hr of TV. 798hr of games. 840hr of books. 2124hr total. Has 750hr free time. Leaves 1374hr worth of consumption.
  • Year 2: 2124hr of new stuff + 1374hr of "leftover" from last year. Buys approx 3/5 as much new storytelling this year as last . At the end of the year, has 2748hr of media in her backlog.
  • Year 3: 2124hr of new stuff + 2748hr of leftover. Spends approx 2/5 as much on new stuff this year. Leaves 4122hr.
  • Year 4: 2124hr new, 4122hr leftover. Spends approx 1/3 as much on new stuff this year. Leaves 5496hr.
  • Year 5: 2124hr new, 5496hr leftover. Spends approx 1/4 as much on new stuff this year.
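
A minimal sketch of that arithmetic as a shell loop (my own illustration of the numbers above, not something from the post; it assumes she splits her 750 free hours between old and new in proportion to what's available, and it needs bc installed):

# hours of excellent new media per year and annual free time, from the estimates above
new=2124; free=750; backlog=0
for year in 1 2 3 4 5; do
  total=$((new + backlog))                     # everything she could watch/play/read this year
  share=$(echo "scale=2; $new / $total" | bc)  # fraction of viewing going to this year's releases
  backlog=$((total - free))                    # whatever she couldn't get to rolls over
  echo "Year $year: new-release share ~$share, backlog at year end ${backlog}hr"
done

Run it and the shares come out at roughly 1, .60, .43, .34 and .27, which is where the 3/5, 2/5, 1/3 and 1/4 figures above come from, with the backlog climbing by 1,374 hours a year.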
And that pattern's repeated over virtually all consumers. Sure, one person might be more into TV than games - but that just means that maybe they consume the top 60% of TV shows. Another's a gamer who doesn't care about all that "old media" - but they play all the top games.

And all of this is complicated further by the fact that the number of shows, games, novels and films that we can consider "eligible" to be viewed is far greater than just the top-reviewed ones. The top-grossing films of the year so far - Jurassic World and Age of Ultron - were rated at 7.3 and 7.8 respectively on IMDB. Of the top-viewed TV shows, only one - Big Bang Theory - was at or above 85%. And the top-selling novels of the year so far are Go Set A Watchman (average review 3.6) and The Girl On The Train (4.0). Only in games were the two top-selling titles also top-reviewed - Metal Gear Solid: Phantom Pain and Grand Theft Auto V.

It's pretty clear that what the average viewer - or even a narratophile like me - considers viewable, playable or readable is considerably wider than just the top-reviewed offerings. There's a massive, growing tidal wave of amazing content for all of us to consume. So what effects can we expect that to have?

The Impact Of The Awesome

Well, the first thing is obvious. Given this data, there's absolutely no question that there are hidden gems aplenty out there - games, films and TV shows which are good, but which aren't getting anywhere near mass exposure. We might assume that getting a really positive response from consumers will still lift you above the masses - indeed, I've heard the argument time and time again that no really good games, films, books, etc are being ignored. But a very brief look through the lists of media I've found above puts the lie to that. For example, how many of us have watched The Algerian, a massively-acclaimed, complex international spy thriller with a string of film festival awards? It's right up my alley and I'd never heard of it. What about Over The Garden Wall, a dark animated series in the style of "Adventure Time", starring the voices of Elijah Wood and Christopher Lloyd amongst others? Reviews are dribblingly enthusiastic, with an average rating of 9.2. Or Tomb Of Tyrants, a fascinating pattern-match / strategy cross-over game with 98% positive reviews on Steam and a small but very dedicated community? (EDITED - Juan Raigada pointed out my original example was flawed - thanks, Juan!) The backlog of genuinely fantastic storytelling that you've never heard of - and often no-one really heard of - and so quite often the creator's no longer creating or unable to get funding - is only going to grow, and it'll grow fast.

So what does let stories succeed? Well, I've written about the power of modern-day myths before, and that's a large part of it. Note that of the best-selling stories I mention above, 5 out of 7 are sequels. And obviously, marketing is a large part of it.

Another way that games creators, novelists, and no doubt soon filmmakers are trying to cut through the noise and get noticed is by being featured in sales or bundle packages - Humble Bundle, Steam Sale, and so on. But the sheer volume of content means that consumers are increasingly arbitraging their purchases to get the sale price. There's a subreddit called Patient Gamers, with 60,000 current subscribers, devoted to just this phenomenon - gamers who wait until games become cheap, because they "just haven't had the time to keep up with the latest releases." That has a double-whammy effect. Not only are your sales likely to be delayed, but where you might have originally expected to take in a $15 or $30 purchase price, you're now getting $5, $10 or less.

So how are readers, watchers and gamers reacting to this? Well, we might expect that with the deluge of new material, we'd start to see people individualising their purchases more, heading into sub-sub-genres that better fit their tastes. But that doesn't seem to be happening. It's easiest to see this in films, where peak box office numbers remain as high as ever. It's the middle tier of filmmaking that has been hollowed out in the last couple of decades, not the top: 4 of the top 10 box office numbers of all time have happened in the last 5 years, and it's 5 out of 10 if we extend to 2009 and "Avatar".

My theory - and it's only a theory - is that the deluge, plus the incorrect assumption that most of what's being created is now terrible, means that people are actually sticking tighter to what they know. If there are only 20 SF novels a year from new authors, most SF fans will be willing to try a few of them. If there are 20,000 new SF novels, paradoxically, the sheer volume of choice and difficulty of knowing what to pick means that we just say "screw it" and re-read Accelerando instead.

What's The Future Of Abundant Stories?

So what's going to happen? Well, in the near future, it'll be ever harder for new voices to break in. As my fictional narratophile above shows, after 5 years of this sort of output even people who do consume new authors or directors will be spending 1/4 of what they normally would. On the upside, if you can hang in there for a few years as a new creator you'll see sales start to rise, even of your older stuff. It's absolutely no longer the case that 75% of the people who would ever buy your Thing will do so in the first year.

More optimistically, I expect to see some sort of breakthrough on discovery in the near future. As I've demonstrated above, at this point it's trivial to find and recommend really great material that your audience may not have heard of. This is already, to a certain extent, the model that's keeping games blogs like Rock, Paper, Shotgun in business, and I can see it extending to other media. (It's notable that of my examples above, the unknown movie has about 50 votes. The unknown game has 4,000. The games world is genuinely already better at discovery, even if it still has a long way to go.) Unfortunately, for film and TV any kind of revolution in discovery will be contingent on solving the entire distribution mess that's currently festering. Currently, as I've mentioned before, all the credible marketplaces for film and TV are a nightmare to get into. I couldn't even tell you how to watch "The Algerian", short of "Pirate Bay and hope". But sooner or later a big player is going to pull a Steam or a Kindle and just throw open the doors of a trusted platform to all comers in film or TV - and they'll make an absolute killing.

And I suspect we'll see an increased segmentation in the landscape, but not along more narrowly-defined genre lines. People will be looking for fault-lines between which to pitch their own personal fan tents, and ways to differentiate the media they do want to consume from the media they don't. We're already seeing genres segment along political lines, of course. In film some of the most successful indie directors are those serving niche communities that don't get much love from the mainstream - faith-based, LGBT, etc. Rather than sub-niching, I think we'll see more of this sort of segmentation, both around core values like sexuality, religion and politics, and around practical issues like income, available time, and multiplayer preference. There's already effectively a "job simulator" genre in gaming - EVE Online, grindy MMOs, etc - and a rapidly growing "I have 15 minutes and I want to play a quick game of something" genre too. Hundreds more "practical genres" like that are waiting to be created.

And in the long term? In the long term, we're going to be in a weird place. There will be more active storytellers producing media per head of the population than there have been for a few hundred years - arguably since the age of the Skald or the bards. We're already at a point where - just looking at the stats above - there is about one working novelist per 20,000 people in the English-speaking world, and about one game developer and one filmmaker per 60,000. Those numbers aren't going down, despite the difficulty of finding an audience. We might end up with a society where one in a thousand people is producing professional-quality, professional-length art of one kind or another. If we get Universal Basic Income, that'll put a rocket under the entire process. At that point, we really will be back to bards and skalds.
With an average audience of about 500 people each, the obvious way for our future-storytellers to differentiate themselves will be by personalising their stories to their tiny audience - which is small enough for them to know each member by name. That might be tremendously freeing in some ways. As a roleplaying GM of 30+ years' experience as well as a filmmaker, the personal nature of roleplaying is one of the things that keeps me in the hobby. It's far easier to design a story for a small group of people whom you know than a large, impersonal mass. And for those who want to sit at the fire and hear these stories, rather than craft them, it's going to be an unprecedented age of having narrative tailored specifically for you. Imagine an MMORPG with only 300 players, for example, or a feature film series that reflects all your preferences and concerns. It won't be a case of boggling when a TV show manages to get the basics of hacking right - there'll be an entire canon of drama series focused around Stallmanesque characters fighting for freedom of software, tailored specifically for people who really care about those issues. (You might be wondering how artists get paid enough to live in this model. My answer is "Other than UBI, no real idea". However, it's worth noting that the cost of producing any storytelling medium except books is currently plunging downward phenomenally fast.)

I hope that's the direction we're going in, anyway. Because the alternative's not too pleasant - a world where 99.99% of all artistic creation is unpaid, often expensive, and where most art is created by patronage or by people wealthy enough to not need to worry about their expenses. Or a world where somehow a Guild Of Storytellers manages to shove the genie back in the bottle, and contain the number of people who make stories, regardless of how many could, down to a manageable level.

What do you think? Where's storytelling headed in the next 10, 20, 50 years? If you'd like to read more of my insane predictions, you can find me at @hughhancock on Twitter, read my blog or follow my current projects via email.


On Diffusion and Evolution by Simon Wardley

I recently saw this tweet and unfortunately, despite good intentions, there's a lot wrong with it. I've taken the main image (unknown copyright) for figure 1 and I'll go through what is wrong.

Figure 1 - Evolution mixed with diffusion



The fundamental problem with this image is it conflates diffusion with evolution. Whenever we examine an activity, practice or form of data then yes, it tends to diffuse. But it also evolves through the diffusion of ever more mature, more complete and more certain forms of the act. Hence in the evolution of an act there may be hundreds if not thousands of diffusion curves involved. The problem you have with trying to tie diffusion curves to evolution is in trying to determine which diffusion curve you're on.

For example, let us take an activity A[x]. Let us suppose it evolves through multiple diffusing instances of the act (e.g. if A[x] was telephony then A[1], A[2], A[3] and so forth would represent ever better phones). I've added these diffusion curves into figure 2 below.

Figure 2 - Diffusion of an activity A[x]


Now each of these diffusion curves can cover different time periods and different applicable markets. Each will have a chasm i.e. in the evolution of A[x] there will be many chasms to cross and not just one. So, when examining the question of early adopters to laggards then we have to ask, which diffusion curve are we on? The laggards of A[1] are not the same as the laggards of A[5].
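To make that concrete (this is my own illustrative notation, not Wardley's formalism), each diffusing instance A[x] can be sketched as its own logistic adoption curve with its own market ceiling and timescale:

N_x(t) = M_x / (1 + exp( -k_x * (t - t0_x) ))

where M_x is the applicable market for that instance, k_x its rate of adoption and t0_x the midpoint of its diffusion. "Fully adopted" only ever means N_x(t) approaching M_x, so the question "which diffusion curve are we on?" always has to be answered relative to a specific market.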

The normal response is to say - "well, we will measure the overall one i.e. when it becomes ubiquitous". Here you have an immediate problem because if I ask the question whether gold is a commodity (i.e. well defined, understood, standardised) then most would respond yes. But if I ask the question "Does everyone own some gold?" then most would respond no. The problem is that ubiquity is relative to a market, so you can't just say "measure its ubiquity" because you need to understand the market first.

But how do you determine the appropriate market? Actually, this was the trick I discovered back in 2006 to 2007. As things evolve, they become more defined and certain and the type of publications associated with the act changes. There are four basic types of publications, shown in figure 3.

Figure 3 - Publication Types.


So when something appears e.g. Radio then we first write about the wonder of radio, then how to build and construct a radio crystal set, then we move on to differences between radios until finally being dominated by guides for use. I used just over 9,000 articles to determine these four types and used this to develop a certainty axis (shown in the figure above and developed from type II and type III publications) and a bit more detail on this is provided here and here.

Now, the transition from Type III to Type IV in the graph above is critical because this defines the point of ubiquity in a market. If I take this as the point of ubiquity and plot back through history over both ubiquity and certainty then you get the following (figure 4).

Figure 4 - Ubiquity vs Certainty

The figure above represents a large range of different activities from telephones to TV to fridges etc. If you now overlay the different publication types (i.e. type I, II, III and IV) then you create the evolution curve (see figure 5). What drives this evolution is competition (supply and demand) and that's marked on as well.

Figure 5 - Evolution


We can now go back to our diffusion curves in figure 2 and plot them on the evolution curve. I've illustrated this in figure 6 (nb. this particular graph is just an illustration, not based upon data).

Figure 6 - Diffusion on Evolution


So when we look at A[1] from a diffusion point of view we might have crossed the chasm and the laggards may be joining, but it's very much in the early stages of evolution. We know from the publication types that, despite the act reaching close to 100% adoption of its market, the market is nowhere near evolved. But at A[5] the act is very evolved and we already know that we've reached the point of ubiquity in the market from the publication types. It might not be the case that everyone has this item but this is what the ubiquitous market for this item looks like and it is now a commodity.

Now with evolution I can add all sorts of changing characteristics i.e. genesis is very different from commodity (see figure 7). So for example, I know those activities or components in the genesis phase are uncertain, rare, risky, a point of differentiation, poorly understood, chaotic, deviating from the past, a source of worth, rapidly changing etc etc.

Figure 7 - Properties


So, I understand what the original image and tweet were trying to convey but alas - as simple and as seductive as it sounds, it's just plain wrong. You can't just mix diffusion and evolution together in that manner. For those wanting to use the diagrams above, they all date from 2007 onwards and are creative commons licensed. The original work (i.e. data collection and testing) was done in 2006 & 2007 and the concepts actually date back much earlier in case you're interested (e.g. I was using the "pattern" of evolution back in '04/'05 though at that time it was just a noticed pattern rather than something with substance).


SETI Near and Far – Searching for Alien Technology by Astrobites

PAPER 1

PAPER 2

 

Think like an Alien

Without a doubt, one of the most profound questions ever asked is whether there are other sentient, intelligent lifeforms in the Universe. Countless books, movies, TV shows, and radio broadcasts have fueled our imagination as to what intelligent alien life might look like, what technology they would harness, and what they would do when confronted by humanity. Pioneers such as Frank Drake and Carl Sagan transformed this quandary from the realm of the imagination to the realm of science with the foundation of the Search for Extraterrestrial Intelligence (SETI) Institute. The search for extraterrestrial intelligence goes far beyond listening for radio transmissions and sending out probes carrying golden disks encoded with humanity's autobiography. Some of the other ways astronomers have been attempting to quantify the amount of intelligent life in the Universe can be found in all these astrobites, and today's post will be summarizing two recent additions to astro-ph that study how we might look for alien technology using the tools in our astrophysical arsenal. The aim of both these studies is to search for extraterrestrial civilizations that may have developed technologies and structures that are still out of our reach, and these technologies may have observable effects that we can see from Earth. This post provides a very brief overview of these studies, so check out the actual articles for a more in-depth and interesting read!

Sailing through the Solar System


Figure 1. Artist's rendition of a light sail.

The future of space exploration via rocket propulsion faces a dilemma. To travel interplanetary distances in a reasonable amount of time we need to travel really fast, and to go really fast rockets need lots of fuel. However, lots of fuel means lots of weight, and lots of weight means it takes more fuel to accelerate. One popular idea for the future of space travel is the use of light sails (see figure 1), which would use radiation pressure to accelerate a spacecraft without the burden of exorbitant amounts of chemical fuel. Though the sail could reflect sunlight as a means of propulsion, beaming intense radiation from a planet to the light sail could provide more propulsion, especially at greater distances from the star (if the sail had perfect reflectivity and was located 1 AU away from a sun-like star, the solar radiation would only provide a force of about 10 micronewtons per square meter of the sail, which is about the force required to hold up a strand of hair against the acceleration of Earth's gravity). Hopefully in the not-so-distant future, we will be able to use this technology for quick and efficient interplanetary travel. Intelligent life in our galaxy, however, may already be utilizing this means of transportation. But how would we be able to tell if someone out there is using something like this?
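As a rough sanity check on that figure (my own back-of-the-envelope, assuming a solar constant of roughly 1360 W/m^2 at 1 AU and perfect reflection, which doubles the momentum transfer):

F/A = 2S/c ≈ (2 × 1360 W/m^2) / (3 × 10^8 m/s) ≈ 9 × 10^-6 N/m^2

which is indeed about 10 micronewtons per square meter of sail.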


Figure 2. Diagram showing the likely leakage for a light sail system developed for Earth-Mars transit. The dashed cyan arrow shows the path of the light sail, and the beam profile is shaded in green. The inset shows the log of the intensity within the beam profile in the Fraunhofer regime. Figure 1 in paper 1.

The authors of paper 1 analyze this means of transportation and the accompanying electromagnetic signature we may be able to observe by studying a mock launch of a spacecraft from Earth to Mars. Without delving into too much detail about the optics of the beam, during part of the acceleration period of the spacecraft some of the beamed radiation will be subject to "leakage", missing the sail and propagating out into space (figure 2). Since the two planets the ship is travelling between would lie on nearly the same orbital plane (like Earth and Mars), the radiation beam and subsequent leakage would be directed along the orbital plane as well. However, like a laser beam the leakage would be concentrated on a very small angular area, and to have any chance of detecting the leakage from this mode of transportation we would need to be looking at exoplanetary systems that are edge-on as viewed from the Earth…exactly the kind of systems that are uncovered by transit exoplanet surveys like Kepler! Also, assuming an alien civilization is as concerned as we are about minimizing cost and maximizing efficiency, the beaming arrays would likely utilize microwave radiation. This would make the beam more easily distinguishable from the light of the host star and allow it to be detectable from distances on the order of 100 parsecs by SETI radio searches using telescopes such as the Parkes and Green Bank telescopes.

Nature’s Nuclear Reactors


Figure 3. Artist's rendition of a Dyson sphere.

Though most SETI efforts are confined to our own galaxy, there are potential methods by which we can uncover an alien supercivilization in a galaxy far, far away. As an intelligent civilization grows in population and technological capabilities, it is assumed that their energy needs will exponentially increase. A popular concept in science fiction to satisfy this demand for energy is a Dyson sphere (see figure 3). These megastructures essentially act as giant solar panels that completely encapsulate a star, capturing most or all of the star's energy and using it for the energy needs of an advanced civilization. To get an idea of how much energy this could provide, if we were able to capture all of the energy leaving the Sun with a 100% efficient Dyson sphere, it would give us enough energy to power 2 trillion Earths given our current energy consumption. If this much energy isn't enough, alien super-civilizations could theoretically repeat this process for other stars in their galaxy. Paper 2 considers this type of super-civilization (known as a Kardashev Type III Civilization) and how we may be able to detect their presence in distant galaxies.

The key to detecting this incredibly advanced type of astroengineering is by using the Tully-Fisher relationship – an empirical relationship relating the luminosity of a spiral galaxy to the width of its emission lines (a gauge of how fast the galaxy is rotating). If an alien super-civilization were to harness the power of a substantial fraction of the stars in their galaxy, the galaxy would appear dimmer to a distant observer since a large portion of its starlight is being absorbed by the Dyson spheres. These galaxies would then appear to be distinct outliers in the Tully-Fisher relationship, since the construction of Dyson spheres would have little effect on the galaxy's gravitational potential and rotational velocity, but would decrease its observable luminosity. The authors of this study looked at a large sample of spiral galaxies, and picked out the handful that were underluminous by 1.5 magnitudes (75% less luminous) compared to the Tully-Fisher relationship for further analysis (figure 4).
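For readers not used to the magnitude scale, the 1.5-magnitude cut converts to luminosity like this (standard magnitude arithmetic, nothing specific to the paper):

L_obs / L_TF = 10^(-0.4 × Δm) = 10^(-0.4 × 1.5) ≈ 0.25

so a galaxy sitting 1.5 magnitudes below the Tully-Fisher relation emits only about a quarter of the light the relation predicts for its rotation speed, i.e. it is roughly 75% underluminous.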


Figure 4. A Tully-Fisher diagram containing the sample of objects chosen in the study. The solid line indicates the Tully-Fisher relationship, with the y-axis as the I-band magnitude and the x-axis as the log line width. Numbered dots mark the 11 outliers more than 1.5 mag less luminous than the Tully-Fisher relationship predicts, with blue, green, and red indicating classes of differing observational certainty (see paper 2 for more details). Figure 1 in paper 2.

To further gauge whether these candidates have sufficient evidence supporting large-scale astroengineering, the authors looked at their infrared emission. Dyson spheres would likely be efficient at absorbing optical and ultraviolet radiation, but would still need to radiate away excess heat in the infrared. In theory, if one of the candidate galaxies had low optical/ultraviolet luminosity but an excess in the infrared, it could lend more credence to the galaxy-wide Dyson sphere hypothesis. However, in reality, this becomes a highly non-trivial problem that depends on the types of stars associated with Dyson spheres, the temperature at which the spheres operate, and the dust content of the galaxy (see paper 2 for more details). Needless to say, better evidence of large-scale astroengineering in a distant galaxy would require a spiral galaxy with very well-measured parameters to be a strong outlier in the Tully-Fisher relationship. Though none of the candidates in this study showed clear signs of alien engineering, the authors were able to set a tentative upper limit of ~0.3% of disk galaxies harboring Kardashev Type III Civilizations. Though an extraterrestrial species this advanced is difficult to fathom, the Universe would be a very lonely place if humans were the only form of intelligent life, and this kind of imaginative exploration may one day tell us that we have company in the cosmos.


Three space fan visualizations of New Horizons' Pluto-Charon flyby by The Planetary Society

It has been a difficult wait for new New Horizons images, but the wait is almost over; Alan Stern announced at today's Outer Planets Advisory Group meeting that image downlink will resume September 5. In the meantime, a few space fans are making the most of the small amount of data that has been returned to date.


Outer Planet News by The Planetary Society

NASA's Outer Planet Analysis Group is currently meeting to hear the agency's current plans and to provide the feedback of the scientific community on those plans.


August 25, 2015

My Favourite Film of 1984. by Feeling Listless



Film  Despite my obvious love of film I've never really had a home cinema set up.  My screen sizes have slowly grown larger over time, I've graduated from VHS to DVD to Blu-ray, but never 5.1 speakers or 7.1 speakers or any of that malarkey and certainly no projectors, however longingly I've looked at them in the Bose shop at Cheshire Oaks or in Richer Sounds.  Mainly cost but also space.  The rooms in the flat really aren't big enough to accommodate them and we have neighbours who might not be too pleased about having a subwoofer vibrating their ceiling.

Which isn't to say I didn't try, and there was a year or several when I plugged my VCR into a hi-fi for the purposes of watching the Star Wars: Special Edition when my parents were away (like I said, small flat) and it was at this moment I happened to watch Electric Dreams, which I'd just bought on sell-through video (in a clever pack which included the soundtrack on cassette), and found myself roundly disappointed because it didn't sound as I'd expected, to the point that, rather like Mike Figgis during some screenings of his Timecode, I began manipulating the sound live.

The key scene is commonly known as The Duel and it's the moment when the newly sentient computer Edgar "meets" his neighbour Madeline for the first time, at least in sound, though it's enough for him to fall for her (and me, to be honest).  Having bought a vinyl of the soundtrack when it was being sold off by the Central Library in town, I had fixed in my imagination how I thought it would sound, with Edgar's electronic noodlings bursting from one speaker and Madeline's cello from another, underscoring the distance between them physically, geographically and otherwise.



Find above a Spotify embed of the track as it appears on a compilation album, though it's identical to the version on the soundtrack. Even listening through headphones, there's a palpable sense of different intelligences communicating from each of the speakers, talking to one another as they improvise around a Bach minuet.  Edgar falling for her, she for Miles, his "user" (in more ways than one) and the man she perceives to be her neighbour.  Having imagined this exciting, pulsating piece and how it would issue out from the film, imagine my disappointment on hearing this:



It's fine but it lacks the urgency of the soundtrack version and of course it doesn't work in quite the same way, because it's the job of a film's soundtrack to put the audience in the same room as the characters, especially if the music is diegetic, as it is here.  Plus, by intertwining the two sounds earlier, it underscores their emotional connection.  But for all of those rationalisations, I wanted to hear Madeline from one speaker, Edgar from the other.

On rewatching the film, I actually sat with the balance knob on the stereo attempting to recreate the moment manually, even attempting to play the cassette in conjunction with the image but they were out of synch.  I can't explain my obsession with this other than being a teenager but it was my first realisation that film soundtracks are sometimes, indeed usually, nothing like the films from which they hail, often because a musician's allowed to present his original ideas unfiltered.  In this case, arguably the pinnacle of Giorgio Moroder's career ...

... with the exception of Madeline's theme which is just ...



... spoiler warning.


FutureLearn Creative Coding Weeks 1-3 by Dan Catt

Circles

I realise now that I should probably have done weekly notes on my progress through FutureLearn's Creative Coding course. Processing, the language used, is one I've always ended up throwing my hands up at and switching to JavaScript, just because it's so darn difficult to debug. It's designed for artists & designers who don't really code, so why it has such terrible debugging tools for figuring out what's going wrong is something I've never understood.

It normally goes: "I want to do line/vector stuff", in which case it's often easier to write SVG with JavaScript and export to Illustrator. You can probably code vectors in Illustrator directly with JavaScript but I haven't looked into it enough. Note to self: do that.

Or, if I want to do image stuff I can use JavaScript Canvas. Even if I was doing audio now I'd probably turn to either Max or JavaScript. The debug tools are just so rich.

Anyway, I figured doing the course would make me stick with Processing long enough to maybe get over my discarding it hump. Sure enough I've already learnt a couple of new tricks I didn't know before.

Week 1

CATT

Week 1 was mainly setting up Processing and playing with some pre-built sketches. This bit was easy for me, but I could see parts of it were a struggle for people who've never coded. If you're really starting from scratch then there are probably easier ways to get introduced to coding concepts.

Week 2

animation 1

Week 2 saw the introduction of the sin() and cos() methods for doing stuff with angles. I've always loved those functions because if you control the movement of everything based on modifications of those, you can get random-ish looking results that always loop. If you're working with degrees, then after 360 frames everything will be back where it started. Moving through the degrees in steps of 2, 3, 4, 6 and so on brings the loop down to 180, 120, 90, 60 etc frames.
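For anyone following along at home, a minimal Processing sketch of that looping trick might look something like this (my own example, not from the course): a dot orbits the centre of the window, and because the angle advances in whole-degree steps the animation repeats exactly every 360 / stepSize frames.

float angle = 0;       // current angle in degrees
float stepSize = 3;    // 360 / 3 = the loop repeats every 120 frames

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  // sin() and cos() expect radians, so convert from degrees first
  float x = width / 2 + cos(radians(angle)) * 100;
  float y = height / 2 + sin(radians(angle)) * 100;
  ellipse(x, y, 20, 20);
  angle = (angle + stepSize) % 360;   // wrap back to 0 so the loop is seamless
}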

animation 2

I did learn a couple of new methods though which I didn't know before.

The first is lerpColor, which is super handy and tells you the colour that falls between two other colours. You can say "Give me the value 2/3rds of the way between this red and that blue", which is all kinds of pain to code yourself.

The other is map, which converts how far we are between two values into the equivalent point between two different values. This is like asking "If my current distance is 45 and that's between a minimum distance of 0 and a maximum of 160, and I want to draw a square with a transparency value between 0 and 255 that represents how far 45 is between 0 and 160, what value should I use?"

(45 is 28.125% of 160, so a transparency value of about 72 should be used, as it's 28.125% of the way from 0 to 255).

That's fairly straightforward, but becomes super useful when the original range is something like -23 to 104, and the output is going from 2048 down to 1024. It handles all that.
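(Under the hood map is just linear interpolation, roughly: output = start2 + (value - start1) / (stop1 - start1) * (stop2 - start2), so map(45, 0, 160, 0, 255) = (45 / 160) × 255 ≈ 72.)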

This, along with the super easy saveFrame("frame_####.png") command and stuff like dist & frameCount, is what makes it so great for artists: annoying maths stuff that you totally want to use all the time, made easy.
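Here's a rough sketch of how those pieces fit together (again my own example rather than course code): a row of squares whose colour is lerped from red to blue across the window, with transparency mapped from each square's distance to the mouse.

void setup() {
  size(320, 160);
  noStroke();
}

void draw() {
  background(255);
  color c1 = color(255, 0, 0);   // red
  color c2 = color(0, 0, 255);   // blue
  for (int x = 0; x < width; x += 20) {
    // lerpColor: 0.0 gives pure red, 1.0 pure blue, 0.66 is 2/3rds of the way
    color c = lerpColor(c1, c2, x / float(width));
    // map the distance to the mouse (0-160) onto transparency (0-255),
    // mirroring the 45 -> 72 example above; constrain keeps it in range
    float d = dist(x, height / 2, mouseX, mouseY);
    float alpha = constrain(map(d, 0, 160, 0, 255), 0, 255);
    fill(c, alpha);
    rect(x, height / 2 - 10, 20, 20);
  }
  // saveFrame("frame_####.png");   // uncomment to write out numbered frames
}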

Week 3

Zachary

This week is where I started to sink my teeth into things. Getting into an easy "win" with drawing lines roughly based on a source image...

Zachary gif

I always think of these as a bit of a cheat, as they are often labeled "generative", which I've always thought of as meaning the code generates the final image. And when you base colours and what have you on a source image, then just leaving the code to run long enough will pretty much result in something looking like the image. Thus not magically "generating" the image, but rather copying pixels from one place to another with style.

However turns out "generative" means that process of copying pixels that generates the final image, rather than directly generating the final image.

I think.

Short version: I still think it's a cheat way of creating an image, but I'm not so cross at the term now.

I then got to faff around with colours more, using my new found love of lerpColor and map...

lines 1

lines 2

lines 3

lines 4

At the weekend I then had a play with taking the code written for the above into the 3rd dimension...

...which had me finally getting down to learning how to add video to Audition so I could watch it while recording some audio. How to adjust the levels of the colours of the video in Photoshop, and then sticking the audio and video together in Premiere.

Not so much about Processing that, but something I've wanted to practice but haven't really had the need. Nice that I know how to do it now.

I'm still not totally sold on using Processing, but as I use it more over the weeks I can see where it could fit into a workflow a bit better.

Now onto Week 4, and keeping better notes.


Galileo's best pictures of Jupiter's ringmoons by The Planetary Society

People often ask me to produce one of my scale-comparison montages featuring the small moons of the outer solar system. I'd love to do that, but Galileo's best images of Jupiter's ringmoons lack detail compared to Cassini's images from Saturn.


August 24, 2015

"I walked into that crowd again and I lost myself..." by Feeling Listless

Music Natalie Imbruglia's back then. After the debacle of the deeply average Come to Life, which still hasn't had an official release in the UK and so is missing from Spotify (was on there for a week then pulled), its emergence borked by the singer's duties as a judge on Oz X-Factor.

Now she's released a pretty good CD of covers of men's songs including Friday I'm In Love and Let My Love Open The Door. It's not Tori's Strange Little Girls (few things are) (well perhaps ScarJo's Anywhere I Lay My Head) but if this BBC Breakfast interview is an indication it has had the effect of reigniting her inspiration:

"Natalie Imbruglia started her career in the Australian soap Neighbours, but when she made the switch to music she picked up a Brit Award, several Grammy nominations and sold 10 million albums worldwide.

After six years away from the music scene, Natalie's back with a new album of cover songs which all have one thing in common - they were originally performed by men.

Natalie told BBC Breakfast why she took a break from music, and how it feels to be singing again."
For new readers, here's some of my previous with Imbruglia from back in 2004 writing about one of the best pop songs of all time.

Here's MEN on Spotify:


On the common fallacy of hypothesis driven business. by Simon Wardley

TL;DR Look before you leap.

There's a lot wrong with the world of software engineering but then again, there's always been a lot wrong with it. Much of this can be traced back to the one-size-fits-all mentality that has pervaded our industry - be agile, be lean, be six sigma, outsource.

However, there is a universal one-size-fits-all which actually works. It's called look before you leap or in other words observe the environment before you decide to take any action. In the case of OODA loops there are even two whole steps, orientate and decide, before you get from observe to the action bit. Alas in many companies action seems to be the default. Our strategy is delivery! Delivering what exactly? Who cares, deliver!

Your organisation, your line of business and even a discrete system consist of many components. All of those components are evolving through supply and demand competition from the uncharted space of the uncertain, unknown, chaotic, emerging, changing and rare to become industrialised over time. The industrialised have polar opposite characteristics to the uncharted, something we've known about since Salaman & Storey's Innovation Paradox of 2002. If you want to accelerate the speed at which you operate and create new things then you have to break down complex systems into stable components and treat those components appropriately.

So, how do you manage this? Well, since most companies fail to observe the environment then they will resort to the only thing possible which is backward causality or meme copying - "Everyone else is doing this thing, so let's adopt DevOps, Agile, Lean, Digital, Cloud, APIs, Ecosystems, Open Source, Microservices" and on and on. Each of these approaches has certain benefits if used in the right context but in most cases, the context is missing. Furthermore, in today's software world various claims are given to being more scientific, to being driven by hypothesis, but many of these ideas are misguided without context.

To understand why, we need to explore the game of chess. A key part of the game of chess is understanding the board i.e. where the pieces are (position) and where they can move to (movement). You don't actually have to physically see the board if you're good enough. You can create a mental model of the board and play the game in your mind. But the board is there, it's an essential element of the game. Though each game may be different, you can learn from each game and use these lessons in future games. This is because you can understand the context at hand (the position of pieces and where they can move) and can learn consequences from the actions you take. You can apply such lessons to future contexts. This is in fact how we learn how to play chess and why practice is so important.

Now, imagine you have no concept of the board but instead all you see is a range of computer characters on the screen (see figure 1). Yes, you can play the game by pressing the characters but you have no understanding of position or movement. Yes, over time you can grab the sequences of thousands of games and look for secrets of success in all those presses, e.g. press pawn, pawn, queen, queen tends to win. You will by nature tend to copy other successful players (who also have no context) and in a world dominated by such chess play, memes will prevail - expect books on the "power of the rook". Action (i.e. pressing the key) will dominate; there is little to observe other than the sequence of actions (previous presses) and all these players exist in a low-level situational awareness environment.

Figure 1 - Chess with low levels of situational awareness.


If you ever come up against a player who can see the context (i.e. the board) then two things will happen. First, you will lose rapidly despite having access to thousands of games containing millions of data points from sequences of action. Secondly, you'll be bewildered. You'll start to grab for the spurious. Naturally, you'll try and copy their moves (you'll lose), you'll look for all sorts of weird and wonderful connections such as the speed at which they pressed the button (you'll lose), whether they had a good lunch or not (you'll lose) and whether they're a happy person or not (you'll lose). It's like the early days of astronomy where, without any understanding, we recorded all sorts of things such as whether it was a windy day. Alas, you will continue to be utterly outplayed because the opponent has much higher levels of situational awareness and hence they understand the context better than you. To make matters worse, with every game your opponent will actually discover new patterns, new ways of defeating you, and they will get better with time. I've tried to show an example of low vs high situational awareness in figure 2.

Figure 2 - low vs high situational awareness.


The player who understands the board will be absorbed by first observing the environment, understanding it (i.e. orientate and decide) and then making a move (i.e. acting). Terms like position and movement will matter in their strategy. Their strategy (the why of action) will be based upon why here over there i.e. why this move over that move. 

Most businesses exist in the low-level situational awareness environment described by figure 1. They have no context, they are rife with meme copying and magic sequences, and they are dominated by action. We already know that this has an impact, not only from individual examples but from examination of a range of companies. It turns out that high levels of situational awareness appear to be correlated with positive market cap changes over a seven-year period (see figure 3).

Figure 3 - Situational Awareness and Market Cap changes.


So what has this got to do with hypothesis driven business? Hypothesis without context is often akin to saying "If we press the pawn button will it give us success?"

The answer to that question is it might in that context (which you're unaware of) but as the game changes with every move then there is nothing of real value to learn. Without understanding context you cannot learn patterns of play to use from one game to another. To give an example of this, let us examine The Scenario as described in an earlier post. This scenario has all the information you require to create a map and to start learning from previous conflicts and repeating patterns. However, most companies have no idea how to map and hence have no mechanism of past learning through context. 

It is certainly possible without context to create multiple hypotheses for the scenario, e.g. expand into Brazil or maybe attempt to differentiate with a new feature. These can be created and tested. Some may well show a short term benefit. However, if you take the same scenario and map it - as done in the Analysis post - then a very different picture appears. Past and repeatable patterns such as co-evolution, ILC & punctuated equilibriums can be applied and it shows the company is in a pretty miserable state. Whilst a hypothesis around differentiating with a new feature might show some short term benefit and be claimed as successful, we already know it's doomed to fail. The hypothesis therefore appears to be right (short term) but before acting, from the context, we already know it's wrong and potentially fatal (long term). It's the equivalent of knowing that if you move the Queen you might capture a pawn (i.e. success from a hypothesis of pressing the queen button) but at the same time you expose the King to checkmate (from looking at the context, the board).

The mapping technique described is about the equivalent of a Babylonian clay tablet but it's still better than having no map as it provides some idea of context covering position (relative to a user need) and movement (i.e. evolution). There will be better mapping techniques created over time but at this moment, this is the best we have. Many of us over the last decade have developed a sophisticated enough mental model of the environment, principles and repeatable patterns that we can just apply them to a scenario without mapping it first. In much the same way, if you get really good at playing chess, you don't even have to look at the board. However, most have no understanding of the board, of position, of movement, of context or the numerous repeatable patterns (a subset of these, 61 patterns, are provided below in figure 4).

Figure 4 - An example list of repeatable patterns / gameplays


Without understanding context, most have no mechanism for anticipation or learning and cannot even use weak signals to refine this. In such cases, you can make an argument that hypothesis driven business is better than nothing at all but it's a very poor substitute for understanding the context. Even if your hypothesis appears to be right, it can be completely the wrong thing to do.

This is the fallacy of hypothesis driven business. Without a mechanism of understanding context then any hypothesis is unlikely to be repeatable as the context will likely change. Yes, you can try and claim it is more scientific (hey, we've pinched a word like hypothesis and we're using data!) but it's the equivalent of saying "If I have a good lunch every day for a month then the heavenly bodies will move!" ... I had a hypothesis, I've eaten well for a month, look those stars moved ... success! Oh dear, oh dear, oh dear. Fortunately, astronomers also built maps.

This doesn't mean there is no role for hypothesis, of course there is! For example it is extremely useful for exploring the uncharted spaces where you have to experiment or for the testing of repeatable patterns or even for refinements such as identifying user needs. But understand the context first, understand where you can attack and then develop your hypothesis. The context is your route to continued learning.

Observe (i.e. look at the context) then Orientate & Decide (i.e. apply thought to that context) then Act (i.e. do stuff in that context). 


An Explosive Signature of Galaxy Collisions by Astrobites

Gamma ray bursts (GRBs) are among the most dramatically explosive events in the universe. They’re often dubbed the largest explosions since the Big Bang (it’s pretty hard to quantify how big the Big Bang was, but suffice it to say it was quite large). There are two classes of GRBs: long-duration and short-duration. Long-duration GRBs (which interest us today) are caused when extremely massive stars go bust.


Fig 1. – Long-duration GRBs are thought to form during the deaths of the most massive stars. As the stars run out of fuel (left to right) they start fusing heavier elements together until reaching iron (Fe). Iron doesn't fuse, and the star can collapse into a black hole. As the material is sucked into the black hole, a powerful jet can burst out into the universe (bottom left), which we would observe as a GRB.

The most massive stars burn through their fuel much faster, and die out much more quickly than smaller stars. Therefore, long-duration GRBs should only be seen in galaxies with a lot of recent star formation. All the massive stars will have already died in a galaxy which isn’t forming new stars. Lots of detailed observations have been required to confirm this connection between GRBs and their host galaxies. It’s, in fact, one of the main pieces of evidence for the massive-star explanation.

The authors of today’s paper studied the host galaxy of a long-duration GRB with an additional goal in mind. Rather than just show that this galaxy is forming lots of stars, they wanted to look at its gas to explain why it’s forming so many stars. So, they went looking for neutral hydrogen gas in the galaxy. Neutral gas is a galaxy’s fuel for forming new stars. Understanding the properties of the gas should tell us about the way in which the galaxy is forming stars.

Hot, ionized hydrogen is easy to observe, because it emits a lot of light in the UV and optical ranges. This ionized hydrogen is found right around young, star-forming regions, and so has been seen in GRB hosts before. But the cold, neutral hydrogen – which makes up most of a galaxy’s gas – is much harder to observe directly. It doesn’t emit much light on its own, but one of the main places it does emit is in the radio band: the 21-cm line. For more information on the physics involved, see this astrobite page, but suffice it to say that pretty much all neutral hydrogen emits weakly at 21 cm.

This signal is weak enough that it hasn’t been detected in the more distant GRB hosts. Today’s authors observed the host galaxy of the closest-yet-observed GRB (980425), which is only 100 million light-years away: about 50 times farther away than the Andromeda galaxy. This is practically just next-door, compared to most GRBs. This close proximity allowed them to make the first ever detection of 21-cm hydrogen emission from a GRB host galaxy.


Fig. 2 - The radio map (contours) of the neutral hydrogen gas from 21-cm radio observations. The densest portions of the disk align with the location of the GRB explosion (red arrow) and a currently-ongoing burst of star formation (blue arrow). Fig 2 from Arabsalmani et al. 2015

Using powerful radio observations – primarily from the Giant Metrewave Radio Telescope – the authors made maps of hydrogen 21-cm emission across the galaxy. They found a large disk of neutral gas, which was thickest in the region around where the GRB went off. Denser gas leads to more ongoing star formation, which as we know can mean that very massive stars may still be around to become GRBs.

The most important finding, however, was that the gas disk had been disturbed: more than 21% of the gas wasn’t aligned with the disk. This disturbance most likely came from a merger with a smaller galaxy, that mixed up the disk when passing by. The authors argue that this merger could have helped get the star-formation going. By shock-compressing the gas, the disturbance would have kick-started the galaxy into forming stars and, eventually, resulted in the GRB.

This paper is quite impressive, as it shows that astronomers are probing farther into the link between GRBs and their host galaxies. Astronomers have known for a while that GRBs are sign-posts to galaxies which are forming lots of stars. But today's paper used radio observations of the gas to connect that star formation to a recent merger. Most GRB hosts are much farther away, and similar observations will be difficult. But with more sensitive observatories – like ALMA or the VLA – it may be possible to see whether the gas of more GRB hosts shows evidence of mergers. Perhaps GRBs are telling us even more about their galaxies than we had thought before!


August 23, 2015

Bad puppies, no awards by Charlie Stross

I'm still at the worldcon, so too busy to blog regularly; won't be home until the back end of the week.

But for now, if you want to know what the sound and fury over the Hugo awards was all about, you could do worse than read this WIRED article, Who Won Science Fiction's Hugo Awards and why it Matters (which gives a pretty good view of the social media context), and if you're a glutton for punishment File 770 has kept track of everything (warning: over a million words of reportage on the whole debacle).

Also, props to George R. R. Martin for talking sense, keeping a level head while everyone was running around shrieking with their hair or beard (sometimes both) on fire, and for salving the burn of injustice with the Alfie awards at his memorable after-party.

I've been seeing a lot of disbelief and anger among the puppies (and gamergaters—there seems to be about a 90% overlap) on twitter in the past 12 hours. They didn't seem to realize that "No Award" was always an option on the Hugos. They packed the shortlists with their candidates but didn't understand that the actual voters (a much larger cohort than the folks who nominate works earlier in the year) are free to say "all of these things suck: we're not having any of it". By analogy, imagine if members of the Tea Party packed the US Republican Party primary with their candidates, forcing a choice between Tea Party candidate A and Tea Party candidate B on the Republican Party, so that the Republicans run a Tea Party candidate for president. Pretty neat, huh? Until, that is, the broader electorate go into the voting booth and say "no way!"

They packed the primary. The voters expressed their opinion. The problem is, the Hugos aren't an election, they're a beauty pageant. And my heart goes out to those folks who found themselves named on a puppy slate and withdrew from the nomination (such as Annie Bellet and Marko Kloos), those who were on a slate but didn't know what was going on and so lost to "no award", and to those folks who would have been on the Hugo shortlist this year if not for a bunch of dipshits who decided that only people they approved of should be allowed to compete in the beauty pageant.


New Doctor Who Season 9 Trailer!!! by Feeling Listless



TV ... which is a thirty second version of the last one. Sorry.  On the upside, this is a rare post with high SEO potential about a show with plenty of CSO.  Also on the upside, Pertwee's People exists now too.


August 22, 2015

Talk Hard. by Feeling Listless

Film The AV Club on Pump Up The Volume:

"The movie originally came from a script called Radio Death,” he explained to The A.V. Club. “It was the story about a guy who was planning to commit suicide on the air, but was having so much fun announcing it and discussing different ways with which he could off himself. But every night he would have his suicide on his mind and he’d go on-air and say, ‘Stay tuned, because this night could be your lucky night. I might kill myself on the air.’ And then that became his whole thing. So I wrote the script about this much darker guy than Happy Harry Hard-On wound up being."
Somewhere I have an old cassette on which I mixed all of Christian Slater's DJ material from the film with music from the soundtrack, which with the old tape-to-tape system sounded just like I'd nabbed it from a dodgy broadcast from pirate radio.


Rev Dan Catt Experimental Audio Diary - Episode 11 by Dan Catt

Tada, time once more for a new RDCXAD podcast. Each time I do this I promise I'll actually write about why I'm doing these. In the meantime there should be a SoundCloud embed below...

...if that doesn't work...

SoundCloud: Rev Dan Catt Experimental Audio Diary - Episode 11
iTunes: RDCXAD
Direct RSS feed: http://rdcxad.revdancatt.com/feed/


[Updated] The United States of Space Advocates by The Planetary Society

See which states have the highest number of space advocates writing Congress and the White House to support planetary exploration.


Pretty Pictures of the Cosmos: Long Exposures by The Planetary Society

Astrophotographer Adam Block brings us images showcasing beauty in details requiring extended exposures to capture.


August 21, 2015

Extracting the BBC Genome: This American Life. by Feeling Listless



Radio Although Radio Four Extra rebroadcast some classic episodes of TAL recently, there are a couple of mentions of the show and its prehistory in the BBC Genome.

In 1993, two years before TAL began in its original form as Your Radio Playhouse, Ira Glass was the producer on the first episode of a series, Your Place or Mine?, "a collaboration between documentary makers from five countries. Over the next ten weeks, programmes from Australia, America, Canada, Ireland and Britain. Stories which cross boundaries - of geography and generation." The synopsis of the episode is pure TAL:

"1. Big Sisters. "Whatever the guys do we can do better. "On the streets of Chicago the girl gangs rule the patch. They are rough and tough, seeking power, friendship and "family".
The episode was co-produced with NPR but their audio archive "only" goes back to 2001 and I can't find another trace of it.

Then in 1998, the Postscript strand on Radio 3 ran a series called "This American Life" in which "Ian Peacock attempts to understand America through its self-image on radio and television" and in episode 3, Niagara Falls:
"From a rain-swept pier on Lake Michigan, award-winning broadcaster Ira Glass attempts to decode America on his weekly national programme. Recently, he has covered every possible American concept, from Canadians to wackiness and the cult of Frank Sinatra. He, of all people, must have an overview of what an American really is."
The strand has since been discontinued and I can't find audio of this either but here are the three TAL episodes referred to in that synopsis:

"Canadians"

"Wackiness":

"the cult of Frank Sinatra"


Chilling Effects by Charlie Stross

When Charlie offered a guest blogging slot, I didn't plan on writing a women-in-science-fiction post. It's not a subject I address very often. As some who commented on Judith's post have mentioned, the issue is complicated--more so now, I think, than when I started writing back in the 80s.

Back then, it never occurred to me to use a man's name. It never occurred to me I couldn't succeed as a woman writing the hard stuff. Of course I knew that any kind of success was a long shot--writing is a tough gig--but I didn't see my name as a liability that could hold me back.

But after six US-published hard SF novels (only one UK-published), I finally started to wonder if I'd been a bit naïve. My work had convinced agents, editors, and reviewers. It won a couple of awards. But outside of a small, albeit devoted, readership, my novels remained invisible to most SF fans. My sixth novel, Memory, is the one in the Women-in-SciFi Storybundle; it was a finalist for the John W. Campbell Memorial Award. And it was my last novel for a long time. I just didn't see the point of writing another, so I stopped. Hey, sometimes books and authors just don't hit, right?

But many years later I was told that I was used as an example of why it is unwise to write hard SF under a woman's name. I have to imagine that a warning like that must be a very discouraging thing for a young and ambitious woman writer to hear. And that's what I want to talk about today: the chilling effect of all the negative statistics regarding women and science fiction, particularly high-tech, hard SF.

Judith has addressed the past and present of the genre. I want to consider the future.

In the current climate, any logical young woman, no matter how predisposed she might be to write at the technically plausible end of the SF field, will surely pause to seriously consider the wisdom of a foray into high-tech/hard SF--and if she decides to focus her talents elsewhere, what will we have missed?

I can already hear the objections. After all, if someone isn't interested enough in our genre to take some risks, why worry about it? There are plenty of other writers out there.

But I do worry about it, because the best writers can write in any field they choose, and if they don't choose our field, that's potentially great work that will never exist.

Don't think the risk is real? Consider this. After a ten-year hiatus, I finally returned to writing. I was shocked to learn that, a decade into the 21st century, women were still using pseudonyms to sell SF. One writer related how an editor had told her bluntly that hard SF would not sell under a woman's name. And social media is full of negative statements and stories--enough to convince any sensible woman to be wary of science fiction, and of hard SF in particular. This is my field, and I was wary of it!

The climate had gotten so bad that by the time I got around to writing a new hard SF novel--more precisely, a near-future, high-tech military novel written in first person from a male point of view, because why not max out the degree of difficulty?--I had no doubt I was making an illogical choice. I knew this was not what I should be writing if the goal was to further my career and grow my audience.

I wrote the novel anyway, because I needed to write it. And then I self-published it, because I didn't want to deal with what I perceived as the negative environment of traditional publishing. Two years ago I was here on Charlie's blog with a guest post about that decision. Since then, The Red: First Light was nominated for both the Nebula and John W. Campbell Memorial awards, and went on to sell to Simon & Schuster's new SFF imprint, Saga Press, with the sequel, The Trials, just out. It would be easy to say my fears were baseless and it all turned out okay--but how many of you have actually heard of these books, or read them?

And more to the point, how many talented women more sensible than me will decide not to bother writing a hard SF novel after considering the statistics and realizing that the odds of success are so very slim?

I want to see our genre thrive. I want to see amazing work that addresses our world in a way that is exciting, thought provoking, meaningful. I want our genre to welcome terrific new writers, not frighten them off. And a lot of those terrific new writers could be women and they could bring women readers with them, growing our genre to the benefit of everyone--but that's only going to happen if we can figure out a way to change both the climate and the reputation surrounding science fiction.

The easiest first step is for fans of the genre to seek out both books and writers new to them. It's never been easier to sample a new book. With ebooks you can usually read at least ten percent for free, and that's generally enough to know if the book is for you. And if you do find books you like? Talk about them. Especially for lesser-known writers, word of mouth is the best promotion there is. Also consider giving those books you like a positive review at online vendors. And the next time there's a request for writer recommendations, volunteer the names of women writers whose work you've enjoyed, as several commenters have already done on Judith's post. No more invisible ink. Right?


The Ashley Madison Hack: A Glimpse of the Post Privacy Future by Albert Wenger

After some initial uncertainty it appears that the data released yesterday is in fact from the Ashley Madison hack that had been announced a little while back. The data contains not just email addresses but a lot of other information including internal corporate documents. As for the email addresses there will be some percentage that was entered by others as Ashley Madison did not require an email confirmation (so I would take any revelations with a grain of salt).

I believe that this hack and subsequent data leak provides a glimpse of a post privacy future. As I have argued before here on Continuations it is not ultimately possible to protect data and what we should be focused on instead is protecting people. Whether en masse, as in this case, or one person at a time, data will continue to come out. We need to work towards a society and individual behaviors that acknowledge this fact and if anything err on the side of more transparency and disclosure.

People have always had affairs. There is nothing new about that. People have also used technology as part of their affairs. For instance, when letters were the technology of the day people wrote letters to their lovers, which then occasionally were discovered. That’s for instance how Eleanor Roosevelt found out about FDR’s affair with Lucy Mercer. So it shouldn’t be at all surprising that people have been using the internet to have affairs. Facebook is apparently cited in one third of divorce cases.

The way forward here is not to pretend that there is a technological solution or to be sanctimonious about affairs. Instead what we need is to acknowledge that affairs are part of human behavior. There is lots of reason to believe that humans aren’t naturally monogamous. If you want a great read on this topic, I highly recommend “Sex at Dawn: How We Mate, Why We Stray, and What It Means for Modern Relationships.”

While it will be quite painful in the short run for many of the individuals whose affairs were or will be exposed as part of this hack, I think something good can come of it — a more measured view of affairs. Here in the US we have become less, rather than more, accepting of affairs over the past decades. Today public ridicule and divorce are common responses to the discovery of an affair. Maybe once we realize just how widespread and likely deeply evolutionarily rooted affairs are we can become more forgiving and understanding.

This is why I will continue to advocate for a post privacy world. Secrets in the end wind up causing more damage than they were originally supposed to avoid. And as we are learning every day now, eventually they all come out. A future of honesty about personal matters and transparency of government affairs is far preferable to doubling down on technological or regulatory means of keeping secrets.


Magnetars: The Perpetrators of (Nearly) Everything by Astrobites

Fig 1: Artist’s conception of a magnetar with strong magnetic field lines. [From Wikipedia Commons]

Astronomers who study cosmic explosions have a running joke: anything too wild to explain with standard models is probably a magnetar. These scapegoats are neutron stars with extremely powerful magnetic fields, like the one shown in Fig 1.

Super-luminous supernovae? Probably a magnetar collapsing. Short, weak gamma ray bursts? Why not magnetar flares. Ultra-long gamma ray bursts? Gotta be magnetars. Magnetars are a popular model due to their natural versatility. In today’s paper, the authors tie together several magnetar theories into a cohesive theoretical explanation of two types of transients, or short cosmic events: super-luminous supernovae (SLSNe) and ultra-long gamma ray bursts (ULGRBs).

The Super-Ultra Transients

Super-luminous supernovae, as their name suggests, are extreme stellar deaths which are about 100 times brighter than normal core-collapse supernovae. The brightest SLSN, ASASSN-15lh (pronounced “Assassin 15lh”), is especially troubling for scientists because it lies well above the previously predicted energy limits of a magnetar model*. The other new-kids-on-the-transient-block are ultra-long gamma ray bursts, which are bursts of gamma-ray energy lasting a few thousand seconds. The other popular variety of gamma ray bursts associated with SNe are plain-old “long gamma ray bursts”, which last less than 100 seconds and are often accompanied by a core-collapse supernova. Both long gamma ray bursts and ULGRBs are currently predicted in magnetar models. The question is: can we tie these two extreme events, SLSNe and ULGRBs, together in a cohesive theoretical framework?

The authors say yes! The basic theoretical idea proposed is that a very massive star will begin to collapse like a standard core-collapse supernova. The implosion briefly leaves behind a twirling pulsar whose angular momentum is saving it from collapsing into a black hole. Material is flung from its spinning surface, especially along its magnetic poles. From these poles, collimated material is seen as high-energy jets, like you can see in this video of Vela. Eventually, the magnetar slows down and finally collapses into a black hole.

Fig 2: The connection between ULGRBs and SLSNe. Along the x-axis is the initial spin-period of the magnetar. On the y-axis is the magnetic field of the magnetar. The red-shaded region shows where SLSNe are possible, and the blue-shaded region shows where GRBs are possible. The red and green points are observed SLSNe.

Connecting the Dots

We can explain the consequences of the model using the image shown above. In the upper left quadrant, the magnetars spin very quickly (i.e. short spin periods) and have large magnetic fields. In this scenario, the escaping magnetar jets are extremely powerful and collimated, and the magnetar will spin down and collapse into a black hole after a few minutes. This scenario describes the typical long gamma ray bursts that we often see.

Now if we move down and right on the figure, our initial magnetic field weakens and the period of the magnetar grows. In this case, the expected jet from the magnetar will weaken, but it will last longer as the magnetar takes a longer time to slow down its life-preserving spin. If the jet is able to blast its way out of the magnetar and is directed towards us, we will see it as an ULGRB, with a lifetime of about a half hour!

One of the most exciting features of the plot is the solid black lines that show where the supernova luminosity is maximized. At these points, the luminosity of the supernova is enhanced by the magnetar, leading to super-luminous supernovae. These lines are in great agreement with three notable, luminous SNe. It’s especially exciting that the black contours overlap with the region where ultra-long GRBs are produced. In other words, the authors predict that it is possible for a super-luminous supernova to be associated with an ultra-long gamma ray burst, tying together these extreme phenomena.

What’s Next?

One of the best tests of this theory will come from observations of ASASSN-15lh over the next several months. Ionizing photons from a magnetar model are predicted to escape from the supernova’s ejected material in the form of X-rays. Observations of these X-ray photons could be a smoking gun for a magnetar model of super-luminous supernovae, so stay tuned!

*Note: At the time of this bite, ASASSN-15lh is not a confirmed supernova. It may be another exciting type of transient known as a tidal disruption event, which you can read all about here.


An August Moment to Check in on NASA’s Budget and Future by The Planetary Society

It’s August. Congress is out of session. Things are quiet. It’s as good a time as any to check in on several issues we’ve been following here at the Society, particularly with NASA’s budget prospects for the year and the future of human spaceflight policy.


August 20, 2015

iPlayer Update. by Feeling Listless

TV Let us return briefly to the iPlayer shenanigans of the past and specifically the malfunctions in the Roku 3 app.

The short version is that they've been fixed. The black bars have gone and the Roku app's working better than it ever has, no need to mess about with the interface between the iPad version and the Chromecast.

That's (1) sorted. None of the other items I addressed in that post - at least the suggestion portions - have been dealt with yet, but the app is being improved with tricks I didn't mention, and already has been.

Now it's possible to begin watching a programme from the start even if it's in the process of being broadcast, which means I'll now be able to toilet properly between the end of Agents of Shield and Have I Got News For You on a Friday night. Many is the evening when I've feared laughter in case of accidents without my PVR to hand.

In the coming weeks the ability to transfer favourites between devices with keyboards and screens and devices that are just screens will also be added, so we'll be able to add potential viewing experiences on a computer and have them appear on the television version, which is immeasurably exciting. Replicating favourites in various places has become especially tedious.


How to automatically locate a model from probe points by Goatchurch

The limitations of the scipy.optimize.minimize() function have now become apparent. They should have called it localminimize(), because all it does is descend from an “initial guess”.

This follows on from the mess I made out of using this same function to calculate the circumcircle of a triangle.

Here I begin with a STL file of a widget which was then probed from 20 random directions to a distance (ball radius) of 5mm.

[Image: widgetprobe2 - the probed widget model]

This was done using my barmesh library, which I ought to start getting back into as I haven’t touched it since I got distracted by all this Arduino electronics.

The barmesh code itself seemed impenetrable when I looked at it recently, but using its features is still possible.

For example, to find the point along a line segment between probefrompoint and probetopoint that is exactly proberad units from the stl model, use the function Cutpos() in the following way.

# assumes P3, mainfunctions and implicitareaballoffset have been imported from the barmesh library
tbm = mainfunctions.TBMfromSTL("stlmodel.stl")  # Triangle Barmesh built from the STL file
ibo = implicitareaballoffset.ImplicitAreaBallOffset(tbm)
probefrompoint, probetopoint = P3(100,100,100), P3(0,0,0)
proberad = 5.0
probevector = probetopoint - probefrompoint
lam = ibo.Cutpos(probefrompoint, probevector, None, proberad)  # fraction along probevector where the probe comes within proberad of the model
assert lam != 2, "probe vector failed to intersect"
probepoint = probefrompoint + probevector*lam

You can then use DistP() to verify that the points chosen are within proberad units of the model. The PointZone object also has a vector pointing to the closest point, which are drawn in red in the image above.

# PointZone also comes from the barmesh library and holds the search state
pz = PointZone(0, proberad+1, None)  # object to manage searching (the second argument appears to bound the search distance)
ibo.DistP(pz, probepoint)
print("Distance from point to stl model is", pz.r, "which is close to", proberad)

It all runs pretty slowly because it’s in Python and I’ve not paid much attention to the speed. The structures are prepared for a more efficient implementation, but it won’t be done until there’s a good reason to finish it.

Statement of problem
Suppose we bolt down the STL part on the bed of a coordinate measuring machine in an unknown orientation at an unknown position. The unknowns take the form of 6 numbers, 3 for the rotation and 3 for the translation, which we encode into the vector X.

The coordinate measuring machine probes the shape at N different places with a spherical probe of radius proberad. This produces a list probepoints of N points all of which are the distance proberad from the model transform(stlmodel, X).

Modulo any symmetry in the stlmodel, there is only one solution of X satisfying:
--> sum( (distance(transform(stlmodel, X), p) - proberad)**2 for p in probepoints ) = 0

We should therefore be able to solve for X, and thus find the exact orientation of the model in the machine from the probing data. We would be able to use this knowledge to drive any hole drilling or edge trimming adjustments to the part.
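To make that concrete, here is a minimal sketch of what such a six-parameter objective could look like. It is only a sketch, not the code actually run; TransformPoint() is an illustrative helper, and MeasureDistanceToModel(p) stands for a function (see DistP() above) returning the distance from point p to the STL model. Following the convention used in the code further down, the transform is applied to the probe points rather than to the model.

import math
import scipy.optimize

def TransformPoint(p, X):
    # X = (rx, ry, rz, tx, ty, tz): rotations (radians) about the x, y and z
    # axes applied in that order, followed by a translation
    rx, ry, rz, tx, ty, tz = X
    x, y, z = p.x, p.y, p.z
    y, z = y*math.cos(rx) - z*math.sin(rx), y*math.sin(rx) + z*math.cos(rx)
    x, z = x*math.cos(ry) + z*math.sin(ry), -x*math.sin(ry) + z*math.cos(ry)
    x, y = x*math.cos(rz) - y*math.sin(rz), x*math.sin(rz) + y*math.cos(rz)
    return P3(x + tx, y + ty, z + tz)

def fun6(X):
    # sum of squared errors between each measured probe distance and proberad
    ds = [ MeasureDistanceToModel(TransformPoint(p, X))  for p in probepoints ]
    return sum((d - proberad)**2  for d in ds)

g = scipy.optimize.minimize(fun=fun6, x0=[0,0,0, 0,0,0])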

As said, I tried to use the scipy.optimize.minimize() function to find the 6-dimensional vector X and it failed completely unless the “initial guess” was close to the right answer.

So I simplified the problem by locking down the rotation component leaving only the translation vector.

This usually gets to a solution, but it takes ages from so many slow evaluations. I’m just doing it like this:

# assumes scipy.optimize is imported; MeasureDistanceToModel(p) returns the
# distance from the point p to the STL model (see DistP() above)
def fun2(x):
    v = P3(x[0], x[1], x[2])  # candidate translation
    ds = [ MeasureDistanceToModel(p + v)  for p in probepoints ]
    return sum((d-proberad)**2  for d in ds)
g = scipy.optimize.minimize(fun=fun2, x0=[0,0,0])

You can simulate moving the model to different places by setting different initial values of x0. It usually gets to the right answer, because points that are far away from the surface apply a disproportionate pull due to the square sum, while points inside the model attracted to the wrong surface make only a small difference.

There will be a much faster way to iterate down to the correct translation by using the vectors pointing to the closest approach to direct the pull. The key is that you know you've got the result when g.fun approaches zero.
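Roughly, that pull-directed iteration could look like the sketch below. This is only a guess at the shape of it: in particular the attribute holding the closest-approach vector on the PointZone object is assumed here to be pz.v, and the search radius is arbitrary.

def EstimateTranslationByPull(niterations=20, searchradius=50.0):
    v = P3(0, 0, 0)                     # current estimate of the translation
    for i in range(niterations):
        pull = P3(0, 0, 0)
        for p in probepoints:
            pz = PointZone(0, searchradius, None)
            ibo.DistP(pz, p + v)        # distance from the shifted probe point to the model
            if pz.r == 0:
                continue                # point exactly on the surface contributes no pull
            err = pz.r - proberad       # residual against the ideal proberad offset
            # pz.v is assumed to point from the query point towards the closest surface
            # point; dividing by pz.r makes it a unit vector, then scale by the residual
            pull = pull + pz.v*(err/pz.r)
        v = v + pull*(1.0/len(probepoints))   # average the pulls and take a step
    return v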

What if we allow for one degree of freedom in the rotation about the z-axis? This could be useful if you know you’ve clamped the part down onto the flat bed, but you don’t exactly know where or how it is aligned.

I’ve simulated this in the following way:

def FindClosestMatchRotDistance(ang):
    # rotate the probe points by -ang degrees about the z-axis, then find the
    # best translation-only fit and return its residual
    theta = math.radians(ang)
    st, ct = math.sin(theta), math.cos(theta)
    rprobepoints = [ P3(p.x*ct+p.y*st, p.y*ct-p.x*st, p.z)  for p in probepoints ]
    def fun2(x):
        v = P3(x[0], x[1], x[2])
        ds = [ MeasureDistanceToModel(p + v)  for p in rprobepoints ]
        return sum((d-proberad)**2  for d in ds)
    g2 = scipy.optimize.minimize(fun2, [0,0,0], method='Powell')
    return float(g2.fun)

As you would expect FindClosestMatchRotDistance(ang) is approximately equal to zero when ang=0.

Here’s what happens when I graph this against different values of ang:

[Image: widgetproberot - FindClosestMatchRotDistance() plotted against ang]

As hoped, the only place where the answer goes to zero is at ang=0 or 360. Any starting point within ±30 degrees is likely to fall into this correct minimum if we applied scipy.optimize.minimize().

But there is a huge local minimum around the ang=180 mark, which is easy to predict from the symmetry of the example and the number of probe points on the hump in the middle.

There are a couple of other minima at around the 90 degree and 270 degree marks, because a rectangular block will fit itself at these orientations slightly better than at one of the cockeyed 45 degree orientations — especially when the two sides of the rectangle are similar in length.

(Don’t worry that the graph is not symmetric about the ang=0 line in spite of the original model being a perfect mirror image of itself. This is because the set of probepoints is random and not symmetric.)

How to solve this problem?

There are two ways.

Either you get the user to estimate the orientation of the part to within 30 degrees, or we have to basically run the minimize() function at 30 degree rotation intervals until it finds a minimum that's actually zero. That is, we scan the solution space at an interval that is narrower than the valley surrounding the correct answer, in the expectation that we land in that valley.
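Reusing FindClosestMatchRotDistance() from above, that scan could be as simple as this (the zero tolerance is arbitrary):

besterr, bestang = None, None
for ang in range(0, 360, 30):
    err = FindClosestMatchRotDistance(ang)
    if besterr is None or err < besterr:
        besterr, bestang = err, ang
    if err < 1e-6:
        break          # effectively zero: we have landed in the correct valley
print("Best starting angle", bestang, "with residual", besterr)

Once a scan angle lands in the right valley, the translation fit inside FindClosestMatchRotDistance() has already done most of the work; a final minimization over the angle and translation together, starting from bestang, would polish the answer.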

All of this can be highly optimized, parallelized and whatever. I would not expect the finished product to depend on the general-purpose scipy.optimize.minimize() function, because we can pool the evaluations of FindClosestMatchRotDistance() and terminate early once we have determined that whatever minima lie in a particular search range cannot be zero.

Also, you can operate on subsets of the probe samples to discount no-hope search zones quickly and very early on.

For example, if we repeat the above analysis for the first 3 probe points, everything computes faster and we get the graph below which rules out about 35% of the solution space straight away.

[Image: widgetproberot2 - the same graph using only the first 3 probe points]

The next trick is to find out if anyone wants this geometric function and whether there is a way of supplying it were it to be written.

I would hope to distribute it self-contained and on the end of a pipe like we did with our Adaptive Clearing processes for parallelism, and have the geometry and points serialized in, and the results serialized back out so that nobody has to be concerned that it’s implemented in Python or in some crazy hybrid Cythonized derivation.

The full-on solution would be to hand live control of the coordinate measuring machine to this application so that it can start its computation as the probe positions are received, and recommend further probe positions based on what would make the biggest difference to the solution space.

It would be a shame if, due to the lack of systems thinking in the industry (forced by the need of the suppliers to make profits from limited general-purpose software, combined with the absence of sophisticated demands from the customers), we are left with a solution that depends on sampling with the probe on a plane grid array and then batch processing the values elsewhere.

Even then it’s still poking the air blindly and it ought to be receiving still images of the part so that it can estimate its silhouette and instantly narrow down the search volume to what matters.

Paint the clamps bright green. The computer must know what shapes they are. When you finally see that happening in the industrial world you’ll know that they’ve finally gotten around to using object visual recognition technology 20 years after the hardware in terms of digital imagery became undeniably available.


Dot Plan by Dan Catt

.plan

On the quick stupid things front I've streamlined updating my .plan file so that I can do it from within Slack, using a Slack Bot thing of course.

Way way back in the day when you'd log into your work or university's mainframe from a terminal you could use the who command to see who else was logged into the same system as you. The command would generally list the active users, how long they'd been active (or inactive) and which terminal number they were logged in at. If you got real fancy you'd have a script that drew an ASCII location map of terminals so you'd be able to track down whoever you needed to change the ribbon on the dot-matrix printer (again).

This is all fine until you need to talk to someone in another department who is using a different server, and you're not sure if they're around, busy or not. Unless you could log onto the server they were using to run who you'd have to use a phone... or even walk over to a different building and actually look for them.

Yuck.

The finger program was created to look up people logged into different remote systems, to quote Wikipedia...

Les Earnest named his program after the idea that people would run their fingers down the who list to find what they were looking for.

The term "finger" had, in the 1970s, a connotation of "is a snitch": this made "finger" a good reminder/mnemonic to the semantic of the UNIX finger command

...I still can't bring myself to speak the command out loud in polite conversation.

You could run the command finger @serverX.someuniversity.edu to see who was currently logged into that server, and finger username@serverX.someuniversity.edu to get more information about a specific user. Very much an early form of "presence information", a way of being aware of other people on remote systems.

This is roughly when I joined the world of networked computers, the times of FTP, Archie, Gopher, Jughead & Veronica, ytalk and yes finger & .plan files.

The .plan file was a simple text file that lived in a user's home (~) directory on the unix system, which the user could update with their latest activity. This, along with the contents of the .project file, would be included with the usual response to a finger request. By keeping your .plan file dated, people could find out what you were up to; in turn you could keep track of them, even if they lived half-way around the world in a different timezone.
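There was no fixed format, but a dated .plan might have read something like this (an invented example):

1994-03-02  Finished the parser rewrite, started on the docs.
            Away at a conference Thursday and Friday.

1994-02-27  Chasing a memory leak in the print spooler. Send coffee.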

Of course this ended up extending beyond humans. Famously, students at Carnegie Mellon University (CMU) got tired of walking down to the end of the corridor only to find out the Coke machine was out of Coke. They wired it up to the network, where it kept its own .plan file updated with Coke can levels, and a quick finger coke@cs.cmu.edu would tell you if it was worth going to fetch one or if disappointment lay ahead.

You could check the Coke levels from anywhere in the world, and people did. Pretty sure it was my own first interaction with a connected machine.

Soon more Coke Machines around the world joined in, and (kinda) a coffee pot.

Plan files also briefly turned into a hybrid status/blogging system, popularised by John Carmack at Id Software around 1999-2003. He and other members of the team would put progress updates for the game Quake and general chit-chat into their .plan files. Users and websites would keep track of updates with the finger command, here's a great example of Carmack's protoblogging...

http://bluesnews.com/cgi-bin/finger.pl?id=1&time=19991103041516

...awww yeah, check out that hot cgi-bin action.

Coming back round to me...

.plan

Plan files were good, in the same way as livejournal, web rings and email were good. Tools that got things done... you wanted to find out what I was up to? Well, you had a way of doing that. But you had to put some effort into it.

Things are much better now, with twitter you don't have to go to any effort, what I've just been doing pops right up in real-time. No-one is going to finger revdancatt@revdancatt.com just to find out what I had for breakfast, but with twitter... well yay, you just know.

So smooth, frictionless.

But I can see myself wanting to be part of Twitter less and less. I've taken my photos off Flickr and self host 'cause of 'The Fear', I already deleted Facebook way back because I wasn't keen on how they operated. Reddit I never did have an account, I'm sure most of it is lovely, but I don't want to be part of something that doesn't do anything much about the parts that aren't. Twitter feels like it's going in that direction.

I need something that fits in somewhere between twitter and blogging (I guess "microblogging" fits the bill), something that I can easily control and I've always had a soft spot for the old dot plan file. I've been updating my .plan & .project files on and off for a while now. More, I have an archive of the old ones. One problem, SSHing into my server, popping open vi and editing them...

So unsmooth, unfrictionless.

I guess that should just be 'friction', huh; anyway, it wasn't happening because it was a PITA. But now, one evening's hacking later I can update my .plan & .project files from Slack. The path away from Twitter is clear. Not a single person will ever find out what I'm doing using the finger command (hello ~tilde.club), but it's not for that, it's for me 10 years into the future.
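The core of such a hack can be tiny. A minimal sketch (not the actual bot, just an assumption of how one might hang together): a Slack slash command POSTing its text to a small web service running on the same box that owns the .plan file.

# Minimal sketch: a web endpoint that prepends a dated entry to ~/.plan whenever
# Slack POSTs a slash-command payload to it.  The "token" and "text" field names
# follow Slack's standard slash-command payload; everything else is illustrative.
import datetime, os
from flask import Flask, request

app = Flask(__name__)
PLAN_PATH = os.path.expanduser("~/.plan")
SLACK_TOKEN = os.environ.get("SLACK_VERIFICATION_TOKEN", "")

@app.route("/plan", methods=["POST"])
def update_plan():
    if SLACK_TOKEN and request.form.get("token") != SLACK_TOKEN:
        return "nope", 403
    entry = request.form.get("text", "").strip()
    stamp = datetime.date.today().isoformat()
    previous = open(PLAN_PATH).read() if os.path.exists(PLAN_PATH) else ""
    with open(PLAN_PATH, "w") as f:
        f.write(stamp + "\n" + entry + "\n\n" + previous)
    return "plan updated"

if __name__ == "__main__":
    app.run(port=8080)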

I'm becoming a digital hermit, I keep my own writing, photos, status updates (and soon video and audio) on my own website, where soon finger, gopher and RSS will be the only way to find me.

It's strangely refreshing.

Although ironically I'm going to be tweeting about this being posted to Medium. Because I'm an idiot who still deep down wants an audience.


Missing Spirals and Forming Planets by Astrobites

Title: On Planet Formation in HL Tau
Authors: Giovanni Dipierro, Daniel Price, Guillaume Laibe, Kieran Hirsh, Alice Cerioli, Giuseppe Lodato
First author’s institution: University of Milan
Status: Accepted by MNRAS Letters

 A Stunning Image

Astronomers everywhere (or at least in my office) let out a collective gasp when the ALMA observatory released an image of the protoplanetary disk around the forming star HL Tau last November. The image (Figure 1) was one of the first taken with the ALMA antennas at their largest separation of up to 15 kilometers. This large separation results in a sharper image, revealing a disk cut by concentric gaps around HL Tau. Gaps such as these have been predicted by theories of planet formation, where planets grow out of the disk and clear their orbits of material. Usually such disks and gaps are shown as a cartoon in a textbook, but this is the first time that these gaps have been directly imaged:

HL Tau: Birth of Planets Revealed in Astonishing Detail

Figure 1. ALMA image of the disk around HL Tau, at a wavelength of 1.3 mm. The origin of the concentric gaps is discussed below. Credit: ALMA (NRAO/ESO/NAOJ); C. Brogan, B. Saxton (NRAO/AUI/NSF)

After the surprise wore off, it was time to explain how those gaps were formed. Ben’s recent astrobite showed that the gaps are located where various molecules turn into ices and help planet formation get started. Today’s paper addresses another puzzle in this image: why are there no spirals? Spiral structure is seen in simulations of planet formation where gaps form, so the authors of this paper set out to see if they could reproduce the concentric rings in the image using a hydrodynamic simulation of the system.

Spirals

Figure 2. Simulation of a gas disk with planets. Spiral structure is ubiquitous in simulations of planets (seen as the three bright dots) embedded in gas disks. The spiral waves come from the gravitational pull of the planets on the surrounding gas.

Simulating the Disk

To find out if planets produce the observed structure of the disk, the authors conducted hydrodynamic simulations of the system. The ALMA observation shows emission from dust grains in the disk, so the authors simulated both dust and gas components. The authors adopt a simple power-law for the initial density profile of both dust and gas.

To simulate the presence of planets, the authors placed three particles in orbit at the location of the three most prominent gaps. The masses of these planets play a large role in how gaps are carved in the disk, so they experimented with different masses until a set was found that matched the ALMA image best.

Dust Life is a Drag

As dust grains move through the disk, they feel a drag from the surrounding gas, which can cause the dust component to develop a different structure from the gas. This drag force is more noticeable as the dust grains increase in size. So the authors ran six different simulations of the disk using six different dust grain sizes, from 1 micron to 10 cm. Figure 3 shows the effect of the dust grain size on the evolution of the dust surface density.


Figure 3. Surface density of dust in six simulation runs with different dust grain sizes, indicated in the lower left. The ALMA image is dominated by emission from mm-size dust grains. These grains show similar structure to the observed disk, while smaller grains are tightly bound to the gas and show spiral waves.

The ALMA image was taken at a wavelength of 1.3 mm, where the emission is dominated by the mm-size dust grains. The distribution of these grains shows clear concentric gaps and no spiral structure! It appears that planets can shape the dust into the observed configuration, while the gas disk may still show the textbook spiral density waves. Crucially, the dust and gas must both be simulated to realistically represent the protoplanetary environment.

Comparing Simulation and Image

To compare the results of their simulations directly to the ALMA image, the authors made mock observations of the simulations that match the specifications of ALMA. The results of these mock observations are shown in Figure 4. The mock is remarkably similar to the real image. The simulated scenario of planets carving gaps in the dust is a good bet to explain what we are actually seeing.


Figure 4. ALMA image (left) vs. simulation (right). Brighter color represents stronger emission. The colorbars show that the simulated disk has much weaker overall emission. This could be explained by a steeper density profile. The overall structure of the gaps is remarkably similar. Remember, the figure on the left is the real image!

The ALMA image of HL Tau is best matched by a disk of dust and gas with three planets of approximately 0.2, 0.27, and 0.55 Jupiter masses. The imaged emission is coming from mm-size dust grains, which do not show spiral structure in the simulation. The models are still quite uncertain, as the authors had to make assumptions about the density distribution and thickness of the disk. But the results confirm that ALMA has begun to uncover planet formation at a scale previously reserved for artistic renditions. More such discoveries are sure to come, as ALMA has only begun to operate in this extended configuration. Stay tuned, and mind the gaps!


Roving Mars—In Utah by The Planetary Society

Students gather in the desert to answer the University Rover Challenge, pushing the limits of the tech that will drive future Mars exploration.


The story behind Curiosity's self-portraits on Mars by The Planetary Society

How and why does Curiosity take self-portraits? A look at some of the people and stories behind Curiosity's "selfies" on the occasion of the official release of the sol 1065 belly pan self-portrait at Buckskin, below Marias Pass, Mars.


No Major Problems with SLS Design, NASA Managers Say by The Planetary Society

A key review of NASA’s Space Launch System did not uncover any major problems with the rocket's design, officials said at Stennis Space Center near Gulfport, Mississippi.


August 19, 2015

The Blockstack Summit by Albert Wenger

About 8 months ago Joel Monegro, who is one of the analysts here at USV, wrote a great post titled “The Blockchain Application Stack.” The post lays out how the bitcoin blockchain can be combined with overlay networks and decentralized protocols to create a new application ecosystem. A lot of work has been accomplished since then by a great many individuals and organizations to bring this stack into sharper focus.

If you want to learn more about the progress, I highly recommend that you head over to Blockstack.org and check out the forums there. A lot of current discussion takes place in the Blockstack Slack.

Most excitingly there will be a Blockstack Summit here in New York City on September 12. If you are interested in topics such as the following you should definitely attend: Distributed hash tables, blockchain scalability, decentralized identity, secure messaging, smart contracts, decentralized storage, peer-to-peer reputation, electronic voting, blockstack governance, peer-to-peer markets, etc.

The format will be a set of 5 minute lightning talks followed by working sessions. If your favorite topic isn’t on the list above you can easily start your own breakout group.

What’s exciting about the summit is that it will bring together people working on actually implementing various parts of the Blockstack with academics studying distributed systems as well as businesses looking to build and use Blockstack based applications.

I am looking forward to attending and participating in a Q&A panel. Special shoutout to NYU Prof Lakshmi Subramanian for helping organize and secure a venue as well as to all the companies sponsoring the event: bitseed, OB1, onename, Chord and Tierion.

So head on over to Blockstack Summit and register now!


Our Brave New World of 4K Displays by Jeff Atwood

It's been three years since I last upgraded monitors. Those inexpensive Korean 27" IPS panels, with a resolution of 2560×1440 – also known as 1440p – have served me well. You have no idea how many people I've witnessed being Wrong On The Internet on these babies.

I recently got the upgrade itch real bad:

  • 4K monitors have stabilized as a category, moving from super bleeding edge "I'm probably going to regret buying this" early adopter stuff to something approaching mainstream maturity.

  • Windows 10, with its promise of better high DPI handling, was released. I know, I know, we've been promised reasonable DPI handling in Windows for the last five years, but hope springs eternal. This time will be different!™

  • I needed a reason to buy a new high end video card, which I was also itching to upgrade, and simplify from a dual card config back to a (very powerful) single card config.

  • I wanted to rid myself of the monitor power bricks and USB powered DVI to DisplayPort converters that those Korean monitors required. I covet simple, modern DisplayPort connectors. I was beginning to feel like a bad person because I had never even owned a display that had a DisplayPort connector. First world problems, man.

  • 1440p at 27" is decent, but it's also … sort of an awkward no-man's land. Nowhere near high enough resolution to be retina, but it is high enough that you probably want to scale things a bit. After living with this for a few years, I think it's better to just suck it up and deal with giant pixels (34" at 1440p, say), or go with something much more high resolution and trust that everyone is getting their collective act together by now on software support for high DPI.

Given my great experiences with modern high DPI smartphone and tablet displays (are there any other kind these days?), I want those same beautiful high resolution displays on my desktop, too. I'm good enough, I'm smart enough, and doggone it, people like me.

I was excited, then, to discover some strong recommendations for the Asus PB279Q.

The Asus PB279Q is a 27" panel, same size as my previous cheap Korean IPS monitors, but it is more premium in every regard:

  • 3840×2160
  • "professional grade" color reproduction
  • thinner bezel
  • lighter weight
  • semi-matte (not super glossy)
  • integrated power (no external power brick)
  • DisplayPort 1.2 and HDMI 1.4 support built in

It is also a more premium monitor in price, at around $700, whereas I got my super-cheap no-frills Korean IPS 1440p monitors for roughly half that price. But when I say no-frills, I mean it – these Korean monitors didn't even have on-screen controls!

4K is a surprisingly big bump in resolution over 1440p — we go from 3.7 to 8.3 megapixels.

But, is it … retina?

It depends how you define that term, and from what distance you're viewing the screen. Per Is This Retina:

27" 3840×2160 'retina' at a viewing distance of 21"
27" 2560×1440 'retina' at a viewing distance of 32"

With proper computer desk ergonomics you should be sitting with the top of your monitor at eye level, at about an arm's length in front of you. I just measured my arm and, fully extended, it's about 26". Sitting at my desk, I'm probably about that distance from my monitor or a bit closer, but certainly beyond the 21" necessary to call this monitor 'retina' at its 163 PPI. It definitely looks that way to my eye.
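Those figures drop out of simple geometry: call a display 'retina' once a single pixel subtends no more than one arcminute at your viewing distance (the usual rule of thumb, and presumably roughly what Is This Retina uses). A quick sanity check in Python:

import math

def retina_distance_inches(h_px, v_px, diagonal_inches):
    # pixels per inch along the diagonal
    ppi = math.hypot(h_px, v_px) / diagonal_inches
    # distance at which one pixel subtends exactly 1 arcminute
    return (1.0 / ppi) / math.tan(math.radians(1.0 / 60))

print(retina_distance_inches(3840, 2160, 27))   # ~21 inches
print(retina_distance_inches(2560, 1440, 27))   # ~32 inches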

I have more words to write here, but let's cut to the chase for the impatient and the TL;DR crowd. This 4K monitor is totally amazing and you should buy one. It feels exactly like going from the non-retina iPad 2 to the retina iPad 3 did, except on the desktop. It makes all the text on your screen look beautiful. There is almost no downside.

There are a few caveats, though:

  • You will need a beefy video card to drive a 4K monitor. I personally went all out for the GeForce 980 Ti, because I might want to actually game at this native resolution, and the 980 Ti is the undisputed fastest single video card in the world at the moment. If you're not a gamer, any midrange video card should do fine.

  • Display scaling is definitely still a problem at times with a 4K monitor. You will run into apps that don't respect DPI settings and end up magnifying-glass tiny. Scott Hanselman provided many examples in January 2014, and although stuff has improved since then with Windows 10, it's far from perfect.

    Browsers scale great, and the OS does too, but if you use any desktop apps built by careless developers, you'll run into this. The only good long term solution is to spread the gospel of 4K and shame them into submission with me. Preach it, brothers and sisters!

  • Enable DisplayPort 1.2 in the monitor settings so you can turn on 60Hz. Trust me, you do not want to experience a 30Hz LCD display. It is unspeakably bad, enough to put one off computer screens forever. For people who tell you they can't see the difference between 30fps and 60fps, just switch their monitors to 30Hz and watch them squirm in pain.

    Viewing those comparison videos, I begin to understand why gamers want 90Hz, 120Hz or even 144Hz monitors. 60fps / 60 Hz should be the absolute minimum, no matter what resolution you're running. Luckily DisplayPort 1.2 enables 60 Hz at 4K, but only just. You'll need DisplayPort 1.3+ to do better than that.

  • Disable the crappy built in monitor speakers. Headphones or bust, baby!

  • Turn down the brightness from the standard factory default of retina scorching 100% to something saner like 50%. Why do manufacturers do this? Is it because they hate eyeballs? While you're there, you might mess around with some basic display calibration, too.

This Asus PB279Q 4K monitor is the best thing I've upgraded on my computer in years. Well, actually, thing(s) I've upgraded, because I am not f**ing around over here.

Flo monitor arms, front view, triple monitors

I'm a long time proponent of the triple monitor lifestyle, and the only thing better than a 4K display is three 4K displays! That's 11,520×2,160 pixels to you, or 6,480×3,840 if rotated.

(Good luck attempting to game on this configuration with all three monitors active, though. You're gonna need it. Some newer games are too demanding to run on "High" settings on a single 4K monitor, even with the mighty Nvidia 980 Ti.)

I've also been experimenting with better LCD monitor arms that properly support my preferred triple monitor configurations. Here's a picture from the back, where all the action is:

Flo monitor arms, triple monitors, rear view

These are the Flo Monitor Supports, and they free up a ton of desk space in a triple monitor configuration while also looking quite snazzy. I'm fond of putting my keyboard just under the center monitor, which isn't possible with any monitor stand.

Flo monitor arm suggested multi-monitor setups

With these Flo arms you can "scale up" your configuration from dual to triple or even quad (!) monitor later.

4K monitors are here, they're not that expensive, the desktop operating systems and video hardware are in place to properly support them, and in the appropriate size (27") we can finally have an amazing retina display experience at typical desktop viewing distances. Choose the Asus PB279Q 4K monitor, or whatever 4K monitor you prefer, but take the plunge.

In 2007, I asked Where Are The High Resolution Displays, and now, 8 years later, they've finally, finally arrived on my desktop. Praise the lord and pass the pixels!

Oh, and gird your loins for 8K one day. It, too, is coming.



Giant planets from far out there by Astrobites

Title: The growth of planets by pebble accretion in evolving protoplanetary discs
Authors: Bertram Bitsch, Michiel Lambrechts, Anders Johansen
First author’s institution: Lund Observatory, Department of Astronomy and Theoretical Physics, Lund University, Sweden.
Status: Accepted for publication in Astronomy & Astrophysics.

A flaw in common planet formation schemes?

Planets are ubiquitous. Therefore, building them must be straightforward, right? The standard and, for the most part, accepted idea sounds roughly like this:

  1. A molecular cloud contracts under its own gravity, forming a protostar due to gas accretion.
  2. A protoplanetary disk forms from the remnants of cloud-contraction around the young star. The disk material is comprised of gas and ~1% solids in the form of small dust particles.
  3. The particles grow somehow (the exact way is under much debate) until they overcome the meter size barrier, forming so-called planetesimals of a few km in size.
  4. These further grow by giant impacts among each other, a process which eventually leaves only a few bodies.
  5. The most massive of the planets (greater than 10 Earth masses) accrete gas from the disk around them, and end up as gaseous planets (like Jupiter, or Saturn).
  6. Eventually, the continued accretion and radiation of the star make the disk remnants vanish and we are left with a fully assembled planetary system.

Each of the above bullet points is a story of its own; the authors of today’s paper tackle the time scale problems of point 4. If all the material of the protoplanetary disk is stored relatively evenly in planetesimals, it is really hard to form gaseous planets, because usually all the gas has vanished before the planets grow big enough to accrete it onto their cores! Moreover, observations show that young stars usually host protoplanetary disks only on the time scale of a few million years (half-life ~ 2.5 million years). When modelers assume the minimum disk mass from the early solar system (the MMSN – Minimum Mass Solar Nebula) and try to form planets from collisions between planetesimals, the disk has to survive much longer than this to reach masses high enough for the onset of runaway gas accretion, which is necessary for giant planets to accrete enough gas before the disk has vanished. This would mean no giant planets could form, which is obviously not the case.

Pebble accretion to the rescue

Bitsch et al. replace the planetesimal phase in the story above with pebble accretion. In the version used by the authors, instead of turning all solids in the disk into planetesimals, lots and lots of smaller rocks (‘pebbles’) form by collisions or sublimation/condensation processes until they are roughly mm-sized. At some places the solids collapse and form rocky cores with sizes from 100-1000 km. These cores then accrete many of the pebbles and eventually form planets. The central point is that under specific circumstances the growth via pebble accretion can be much faster than via collisions among planetesimals and therefore potentially solves the time scale issue from above.

Variation and time evolution

Further, as you look deeper into the evolution of the protoplanetary disk and the observed planet population, the physics turns out to be very complicated, which has to be reflected in the simulations. Therefore, the authors introduce models of the migration of the planets in the disk (determining whether they end up close to the star or far away), and of the change in temperature, density and therefore solid content of the disk over time. The migration of planets is a subject of its own, crucially important for the explanation of ‘Hot Jupiters’ (massive giant planets in close-in orbits around their host stars). Figure 1 shows the final outcome of a model where the solid content of the disk is 1%.


Fig. 1: Final masses of planets depending on the initial conditions, formation distance to the host star r0, and formation time t0 in the disk. All planets below the white line have undergone massive gas accretion and end up as gas giants. The black lines indicate constant planetary masses. Bitsch et al. (2015)

 

The implications for… the Solar System

Every good theory for planet formation has to be able to explain how our own living room, the solar system, came into being. Doing so proves to be rather easy for the pebble accretion model and turns out to be consistent with the Nice model (video here), which explains the bombardment history of the inner solar system with unstable dynamics of Jupiter, Saturn, Uranus and Neptune. However, it fails to reproduce the initial conditions of another famous theory, the Grand Tack scenario, which explains the masses and orbital distances of Earth and Mars and the features of the asteroid belt by letting Jupiter and Saturn travel through the inner solar system within the first 10 million years after the formation of the system. A specific prediction the authors can deduce from their models is noble gas enrichment in Saturn’s atmosphere. This is known for Jupiter and — using the pebble accretion scenario — holds for Saturn as well. This is only possible because the accretion of pebbles in this scenario is still sufficient to feed planetary cores very far away from the host star. Out there, the solid density is usually too low to grow planetary seeds, at least through common formation channels. This appealing ability of the pebble accretion model brings us to the next section, where we transfer the model outcomes to other planetary systems.

The implications for… planet population synthesis

Figure 2 shows which kinds of planets the cores grow into for the different starting conditions of the simulations of Bitsch et al.


Fig. 2: Planet categories formed in the simulations as a function of formation time t0 and formation distance to the host star r0. The ‘ice planets’ regime is a whole new class of planets which has not been detected so far. Bitsch et al. (2015)

First of all, the authors predict a whole new class of planets which have not been found so far! The turquoise regime indicates planets with a more massive core than envelope, but only a few Earth masses, such that they are smaller than ice giants like Uranus and Neptune. In the simulations the ice planets form far out in the disk, where all volatiles are frozen, such that their interior is mainly composed of ices. These planets show up very frequently in the simulations at late stages, at which time most of the gas is gone and they cannot accrete much anymore. This kind of planet is not predicted by other planet formation models. Unfortunately, it will be hard to detect them with the current generation of planet observatories, as the ice planets are far out and pretty small…

Additionally, the simulations of the authors feature some hints on the formation channel of ‘Hot Jupiters’. Because it is so easy for the pebble accretion model to form gas giants at large orbital distances, it is likely that a lot of them form in the outer disk and that one of them is eventually perturbed by the others, scattered into the inner disk and brought to a halt close to the host star.

 

All in all, my main take-away from this work is that the formation of planets at large orbital distances can be much more efficient than previously thought. This would solve a riddle in current planet formation theories and could further explain extremely massive planets with large semi-major axes, without invoking large-scale gravitational instabilities in the disk.


LightSail Lands New Hardware for Laser Ranging by The Planetary Society

A cluster of small mirrors will be added to LightSail's aft hull to allow engineers to precisely track its position as it sails around the planet next year.


August 18, 2015

My Favourite Film of 1985. by Feeling Listless



Film As with plenty of films on this list, St Elmo's Fire is so ingrained in my psyche it's almost impossible for me to offer a critique. The usual meagre research for this article/post/exposition reveals to me that Rob Lowe won Worst Actor Razzie for his performance as college nostalgist Billy Hicks and so close am I to this hundred and ten minutes, it's impossible for me to objectively judge if they were right.

All I can think about is the key scene when he comforts Demi's flibbertigibbet Jules with an explanation for St. Elmo's Fire and how warm Lowe is, so much so it comforted me on a number of occasions during my teenage years (when I probably saw the film as often as Adventures in Babysitting). On the few occasions people have needed me to help them in similar ways, I'm sure it's Billy Hicks I'm channelling.

Except the problem with that scene, with his speech, is that it's completely false. Here's what Billy says:

BILLY:
Jules, y'know, honey... this isn't real. You know what it is? It's St. Elmo's Fire. Electric flashes of light that appear in dark skies out of nowhere. Sailors would guide entire journeys by it, but the joke was on them... there was no fire. There wasn't even a St. Elmo. They made it up. They made it up because they thought they needed it to keep them going when times got tough, just like you're making up all of this. We're all going through this. It's our time at the edge.
Let's pick our way through this now and painfully debunk the message utilising passages from what seems like a pretty well referenced Wikipedia entry.

BILLY:
Electric flashes of light that appear in dark skies out of nowhere.

WIKI:
Actually, St. Elmo's fire is a form of matter called plasma, which is also produced in stars, high temperature flame, and by lightning. The electric field around the object in question causes ionization of the air molecules, producing a faint glow easily visible in low-light conditions. Roughly 1000 volts per centimeter induces St. Elmo's fire; the number depends greatly on the geometry of the object. Sharp points lower the required voltage because electric fields are more concentrated in areas of high curvature, so discharges are more intense at the ends of pointed objects.

Conditions that can generate St. Elmo's fire are present during thunderstorms, when high voltage differentials are present between clouds and the ground underneath. Air molecules glow owing to the effects of such voltage, producing St. Elmo's fire.

The nitrogen and oxygen in the Earth's atmosphere cause St. Elmo's fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon lights to glow.

BILLY:
Sailors would guide entire journeys by it ...

WIKI:
It is a sign of electricity in the air, which can interfere with compass readings, making it poor as a navigational tool and some sailors may have regarded it as an omen of bad luck and stormy weather. Other references indicate that sailors may have actually considered St. Elmo's fire as a good omen (as in, a sign of the presence of their patron saint).

BILLY:
.... but the joke was on them... there was no fire.

WIKI:
Physically, St. Elmo's fire is a bright blue or violet glow, appearing like fire in some circumstances, from tall, sharply pointed structures such as lightning rods, masts, spires and chimneys, and on aircraft wings or nose cones. St. Elmo's fire can also appear on leaves and grass, and even at the tips of cattle horns. Often accompanying the glow is a distinct hissing or buzzing sound. It is sometimes confused with ball lightning.

In 1751, Benjamin Franklin hypothesized that a pointed iron rod would light up at the tip during a lightning storm, similar in appearance to St. Elmo's fire.

BILLY:
There wasn't even a St. Elmo. They made it up.

WIKI:
St. Elmo's fire is named after St. Erasmus of Formia (also called St. Elmo, one of the two Italian names for St. Erasmus, the other being St. Erasmo), the patron saint of sailors.
Disappointing, but educational.  You could also rationalise it as Billy deliberately disregarding the science because sometimes (and this is even a shock to a rationalist like me) faith and hope are more palpable and useful emotions than reality.

St. Elmo's Fire is one of the few films on this list for which my memory fails when trying to remember when I first saw it.  On release in 1985, I would have been too young to even know what it was let alone see it at the cinema (though as I type this I remember seeing adverts for it in Smash Hits).  But I did know John Parr's theme song very well, and may have bought a seven-inch of it at Penny Lane Records in Matthew Street before even seeing the film.  Eventually I owned a VCI copy of the VHS, then the dvd in the late 00s.  It's on Netflix too.

The soundtrack even on cassette was very expensive and became a key Christmas present in about 1992.  What I do have a vivid memory of is inviting three of my course friends, Melanie, Alex and Madeline to my dorm room for a group study session, one of the rare occasions my course mates visited me at home and putting the soundtrack on while we waited.  Mel smiled and said that she'd just paid £25 for the cd.  That was just eight years after the film's release which is the same length of time between now and the first Iron Man film.  It's our time at the edge, and how.


Networks, Firms and Markets by Albert Wenger

There has been a renewed interest in the impact that digital technology is having on the boundaries of the firm. In particular there is a sense that networks operating on top of platforms are competing effectively with traditional firms. Examples include AirBnB competing with hotels without owning any real estate, Uber with transportation companies without owning any cars, LendingClub with banks without having a balance sheet, Upwork (formerly Elance/oDesk) with systems contractors without having employees, etc. (Aside: I am planning to write a future post about the many profound differences between these, which get lumped together as if they were somehow the same).

When most people write about the boundaries of the firm they refer to Ronald Coase’s groundbreaking work on “The Nature of the Firm.” This work is well known, but it dates back to 1937, and while Coase provided a breakthrough in thinking, his grasp of the nature of transaction costs was limited. A lot of work since then has gone into giving much more specificity to the nature of these costs by economists such as Oliver Williamson, Michael Jensen, Bengt Holmstrom, John Roberts, Oliver Hart, Paul Milgrom and many others. If you want a great overview of this work I highly recommend the reader put together by Randall Kroszner and Louis Putterman. If you want a nutshell summary of the most important tradeoff keep reading on.

There are two fundamental problems to organizing economic activity: motivation (getting people to expend effort, companies to allocate resources to an activity) and coordination (arranging activities so that they fit together, such as making sure a part needed for production arrives on time). It turns out that there is a tradeoff between the two and the degree of tradeoff is influenced by the available information technology. The TL;DR version: better, faster, cheaper access to information reduces the degree of the tradeoff and makes the network model possible.

To see the tradeoff just consider what happens inside a firm. The firm pays employees a fixed salary. As a baseline that salary is the same independent of what specific task the employee works on. That allows the firm to highly coordinate the activities of employees. But of course it reduces the employee motivation to work hard (because the salary doesn’t respond to the intensity of effort). Pretty much everything on top of salary that you see in compensation, such as stock options, bonus plans, etc. is designed to bring some of that motivation back.

A competitive market is the opposite extreme. Each party gets to keep the additional gains arising from effort so motivation is maximized. Restaurants are a great example and this explains at least in part the extraordinary work efforts in that segment. But the activities between individual restaurants are not coordinated. Try ordering an appetizer from one place and a main dish from another with a dessert from a third. In markets then we see devices such as long term contracts as a way to bring back some degree of coordination.

Now the beauty of the advance in information technologies is that they shift out the tradeoff frontier (or envelope): it is now possible to achieve more motivation and more coordination than was possible before. Here is the intuition for it. Without any communication technology you only act on the information available to you locally. At that time in history we had lots of markets and a few small firms (e.g., a farm, a shop, a local manufacturer) in which the employer works directly with the employees. As you add limited communication technology, such as the telegraph, you can relay information to a central office which can figure out the best course of action and send decisions back. That stage gave rise to large firms that eventually spanned the globe. Now comes something like the Internet and everyone can have access to all the information; that is, each node can evaluate its actions in the global context of any benefits from coordination. This gives rise to networks and other platforms and hybrid organizations that organize the gathering and sharing of information. If you want a more rigorous analysis of this I suggest the second essay from my dissertation.

But it is super important to understand that the motivation-coordination tradeoff has *not* been eliminated. It has just been reduced. And that means we will still see both firms and markets in addition to the network model even though the network model will likely grow a lot. If a market (or strategy within a market) calls for extreme coordination, the firm model will outperform the network. For instance, to create the perfect consumer experience for a smartphone Apple has integrated vertically backwards all the way to making its own chips. Conversely if motivation is of the essence the market model will outperform a network. Athletes in individual sports such as tennis or golf are a great example.


August 17, 2015

No Adrics. by Feeling Listless



TV The Office for National Statistics published the list of chosen baby names in 2014 and I thought I'd try and see how many of them have clearly been selected by Doctor Who fans. So after feeding all of the tv companion names (with the loosest definition to save arguments) into an Access database along with all the names and numbers (which I'm telling you so you don't think I spent a lot of time over this) we discover the following. Ace is surprisingly popular ("Ace!") and there are three new Peris in the world along with eight Romanas.  I'd do the spin-off companions too but I'd be here all day (and there's no point disappointing fans of Olla The Heat Vampire any further).

JACK - 5804
HARRY - 5379
AMELIA - 5327
GRACE - 2785
ADAM - 1790
ROSE - 990
MARTHA - 807
RORY - 755
JAMIE - 749
AMY - 668
JACKSON - 602
SARAH - 601
JOHN - 601
SARA - 596
VICTORIA - 584
ZOE - 580
CLARA - 464
BEN - 364
WILFRED - 234
POLLY - 193
TEGAN - 130
RIVER - 116
ALISTAIR - 105
STEVEN - 92
ASTRID - 87
IAN - 80
LEELA - 79
MELANIE - 70
CHRISTINA - 64
RIVER - 63
ADELAIDE - 54
CRAIG - 50
ACE - 39
JAMIE - 35
BARBARA - 25
MICKEY - 24
MIKE - 18
SUSAN - 15
KATARINA - 15
ROMANA - 8
DONNA - 6
PERI - 6
JO - 3
LIZ - 3

Poor Adric.
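
For anyone who wants to repeat the matching without firing up Access, here's a rough sketch of the same join in Python; the companion list is deliberately partial and the CSV column names are guesses rather than whatever the ONS download actually calls them, so adjust to taste:

    import csv

    # A partial, illustrative companion list -- swap in your own
    # "loosest definition to save arguments" version.
    companions = {"JACK", "HARRY", "AMELIA", "GRACE", "ROSE", "MARTHA", "RORY",
                  "AMY", "CLARA", "TEGAN", "ACE", "PERI", "ROMANA", "ADRIC"}

    counts = {}
    # Assumes the ONS 2014 baby names table exported as a CSV with
    # "Name" and "Count" columns -- rename to match the real file.
    with open("ons_baby_names_2014.csv", newline="") as f:
        for row in csv.DictReader(f):
            name = row["Name"].strip().upper()
            if name in companions:
                counts[name] = counts.get(name, 0) + int(row["Count"])

    for name, total in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{name} - {total}")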


August 16, 2015

Data, books, and bias by Charlie Stross

In May, I posted on my blog brightly-coloured pie charts presenting some data about literary awards. They weren't the most gorgeous graphics ever, but they conveyed my point: that the more prestigious the prize, the more likely the subject of the winning narrative will be male. Nothing earth-shatteringly new, but solidly presented. I considered the thing done and went to bed. I woke up to a world gone mad: the post had gone viral. I spent the next three weeks fielding emails and interview requests from global media.

This response took me by surprise because, as I've said, what I was saying was not new. I've been talking about it for years; many have talked about it. But what was new, apparently, was how I presented the data.1 Pictures speak louder than words. Pictures about numbers seem to speak very loudly indeed.

Many of us aren't very good at seeing past our own assumptions (Just look at some of the comments on Judith's post.) We are biased towards our own experience. Data can mitigate that bias. It's hard to deny numbers. Especially if they're numbers others can verify, taken from acknowledged and expert public sources, collated using consistent, transparent methods.

If we want to understand something, we have to be able to see it clearly. Numbers help with that. That's what I've been doing the last couple of months: counting, talking about counting, persuading others to count, and hammering out methods and process.

But I'm a novelist. My forte is words. Numbers are less familiar territory. The last time I paid any attention to the manipulation of numbers was a very long time ago when I studied Mathematics at 'A' level. (And, full disclosure, I ended up dropping that A level in favour of English.) In other words, I do not self-identify as a data geek. I've had my road-to-Damascus conversion, yes; I believe; but (sadly) the conversion did not come packaged with instant mastery of statistical manipulation. At best, then, I'm data-curious.

For that blog post in May, I analysed the last 15 years’ results for half a dozen prestigious book-length fiction awards: the Pulitzer Prize, Man Booker Prize, National Book Award, National Book Critics’ Circle Award, Hugo Award, and Newbery Medal. The method was simple: for each prize I read the winning book of each year, or a couple of reviews of same plus a sample of the text, and assigned the book to one of four columns: from the point of view of a woman or girl, from a man/boy, from both, or from a character whose gender in some way can't be slotted neatly into the usual gender binary. (For the sake of brevity I labelled that Unsure.) Then I collated the gender of the writer3 with that of their protagonist/s. Then I found a free, web-based chart-making site, and turned the results into pie charts.4
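
For the chart-making step, a minimal matplotlib sketch looks something like this; the counts below are placeholders, not the real tallies:

    import matplotlib.pyplot as plt

    # Placeholder tallies for one award over 15 years -- substitute the real counts.
    labels = ["Man/boy", "Woman/girl", "Both", "Unsure"]
    counts = [9, 3, 2, 1]

    plt.pie(counts, labels=labels, autopct="%1.0f%%", startangle=90)
    plt.title("Protagonist gender, prize winners (placeholder data)")
    plt.axis("equal")  # keep the pie circular
    plt.show()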

Here's what the one for the Hugo Award for best novel looked like:

hugo.png

"Unsure" can apply to both author and protagonist. In this single case it is Ann Leckie's Ancillary Justice.

At this stage I'm less interested in the Why than the What and the How Many. Why, in my opinion, can only emerge when we dig deeper and get a clear picture of what's actually happening (and manage to look past our biases--we all have biases). That will take time. We need to survey writers' organisations and ask: When you began your book, what influenced the gender of your protagonist? And then ask agents how they choose the books they represent. And then publishers what numbers of books about women and about men were submitted, accepted, supported etc. Which were submitted for review, and where. Which were praised, and by whom. Which were put on new fiction tables at the front of bookstores and libraries. Which were submitted to prizes, and why. Which were long-listed, then short-listed, then chosen for the prize. Then remembered.5

But it has to start somewhere. And that's what I've done. I started Literary Prize Data6, a group to count, share, collate, present, and discuss book numbers. Right now we number about 35 from three continents.7

The group is new: one week old. But already we have people working on the Edgars, the Campbell, taking a more granular look at the Hugos, and more. Some of us are genuine data geeks. Some novelists. Some academic researchers. Some readers. We could use all the help we can get. If you want to help, sign up. Count something. Help design the best way to interpret and present what you and others have counted. Actually counting, and then finding different ways to parse the results, and different ways to display those results, makes the reality more concrete than ever. If we're transparent about what we're counting and how, the conclusion—that not only do more men than women win prizes, but even the women who win are likely to win for writing about men—is difficult to argue with.

To go back to the Hugos, another of our group, Eric, plotted the running ten-year average for the percentage of women authors nominated for the Hugo award. As he says, "As Nicola suspected8 things were getting better for a while before dipping in the 90's and then partially recovering more recently... But what's really interesting...is what we see if we show the percentage of women in the membership of the SFWA."9

women and sf stats graphic.png

See the caveats in the footnote, but from the mid 70's to the mid 90's, the percentage of women nominated for the Hugo award tracks the percentage of women in the field. That is, setting aside any barriers to entering the field, once "you're writing SF/F professionally the odds of being nominated for a Hugo were roughly the same for men and women. Since then, the percentage of women in the field has continued to rise, indicating falling barriers to entry, but the award nominations no longer track the number of women in the field, which suggests a higher level of discrimination in the awards selection."10

Soon we'll be able to update the Hugo information. We hope also to have a breakdown of the shortlists in each category. Stay tuned. Meanwhile, if this effort intrigues you and you'd like to help, please consider joining the volunteers who last week began a concerted effort to track and collate this information. The more who count, the less each has to do.


1 Pie charts have been used a lot to show bias in publishing. See, for example, VIDA and the work Niall Harris is doing at Strange Horizons.
2 These are awards that, in my opinion, influence the author’s subsequent book sales and/or career arcs. It’s subjective: I haven’t pulled together reliable data on book sales pre- and post-awards. (Though here are links to three articles which include cherry-picked numbers and anecdata on the National Book Award, the Man Booker, and the Hugo Award.)
3 I assumed that when reviews talked about an author as “she” or “he” that author identifies as female or male respectively.
4 I made it clear on my blog that I was open to corrections. I still am.
5 I talk about this in more detail in an interview with the Seattle Review of Books. And also explain why it's so important that we have stories about women.
6 I have also taken the Russ Pledge, to talk about books by women whenever I talk about books. I then tweaked the pledge to privilege books not only by but about women.
7 I'm not the only one counting. See footnote 1.
8 I have an idea about why, but zero data to support it.
9 "I'm using SFWA membership as a proxy for 'people professionally writing science fiction and fantasy.' It's the best proxy I can think of, but it's not perfect. The other caveat is that I could only find data for three years: 1974, 1999, and 2015 (from [Nerds of a Feather]." If you have more/better data, please share!
10 Eric goes on to say that "one significant aspect of this pattern is that it mirrors what has gone on in other fields. If we look at scientists working in the life sciences, for example, the number of women entering the field has approached parity in recent years, but this hasn't been reflected in the percentage of women in higher-level positions (such as full professor) or the most prestigious awards (see, for example, here and here). A succinct way of putting this is that 44% of biological scientists were women by 2000, but 16% of Nobel prizes for physiology and medicine have gone to women in the last ten years."


Extracting the BBC Genome: Storyville. by Feeling Listless



Film Storyville is one of the BBC's primary documentary strands, co-funding and licensing non-fiction films from a range of sources under the guiding hand of producer Nick Fraser. Beginning on BBC Two in 1997 and now well embedded on BBC Four, it's become a valid alternative to the dominant forms of televisual storytelling, the presenter-led and talking-heads documentary, although it obviously includes examples of both.

Between 1997 and 2002, Storyville was just on BBC Two.  Then for the opening months of 2002 it was exclusively on BBC Knowledge and, when that closed, it shifted to BBC Four before eventually being shared between Two and Four, the former carrying the prestige, usually theatrical, releases and repeats in a late-night "BBC Four on BBC Two" slot after Newsnight and later.

As with Close-Up, I've often wondered which films were included in the strand across time and thanks to the BBC Genome that information is now readily available, so I decided to make a list.

Which isn't to say there aren't other similar lists already. But the TVDB is pretty inconsistent and incomplete for much of the time. The Wikipedia covers the period beyond the Genome, as obviously does the BBC Programme page, which began gathering scheduling information some time in 2007.

There isn't anything I can find which goes back as far as 1997 when the strand began.

Until now.

Which isn't to say this list will be perfect.  Obviously.

Firstly, I've left out transmission dates and descriptions.  For the purposes of this exercise I'm not sure how useful they are.  But if that is important for the purposes of your exercise, you should be able to find that information by copy/pasting the title into the Genome search box, along with the word Storyville if necessary.

There may also be omissions.  My process was to search for "Storyville" on the Genome and rekey the results.  If it wasn't designated as a Storyville broadcast on the listing pages of the Radio Times, it won't be here.

The Wikipedia, for example, has Winged Migration as a Storyville entry, as does Box of Broadcasts (so it would have been listed as such on the Freeview EPG at some point), even though the Genome doesn't mention it, so that information wasn't in the Radio Times listing. It also has its own BBC programme page (as well as this stubby page on BBC News where it is captioned as Storyville).

Do let me know if I've missed anything.

I've ignored duplicate broadcasts too.  Some of them have been repeated dozens of times.  Paris Brothel.

Also, I've added director surnames in brackets where that information is available in the Genome which tends to be when the film was broadcast on BBC Two, for which entries were longer due to the position of the listing in the Radio Times.  Only post digital switchover did BBC Four's listings really become as detailed as the so-called main channels.

So the best I can say is that this list is largely accurate, I think.  I have only included the programmes listed in the Genome, which ends at the end of 2009, so if you are interested in what happened after that you'll also have to consult the BBC programme pages or the Wikipedia.




1997

Little Dieter Needs to Fly (Herzog)
Nobody's Business (Berliner)
Wednesday (Kossakovsky)
Naughty Boy (Jensen)
Paradise Lost (Berlinger & Sinofsky)

1998

East Side Story (Ranga)
When We Were Kings (Gast, 1996)
Kurt and Courtney (Broomfield, 1997)
Don't Look Back (Pennebaker, 1967)
444 Days (Woodhead)
Year of the Dogs (Cordell)
Wako: the Rules of Engagement (Gazechi)
Gigi, Monica and Bianca (Abdehaoui & Dervaux)
Moon Over Broadway (Hegedus & Pennebaker)

1999

A Small Town in Poland (Marzynski)
Resurrection (Stenderup)
Fragments: Jerusalem (Havilio)
Photographer (Jablonski)
I Was A Slave Labourer (Holland)
An American Love Story: Welcome To America (Fox)
An American Love Story: I've Fallen and I Can't Get Up (Fox)
An American Love Story: It's Another New Year and I Ain't Gone (Fox)
An American Love Story: Chaney and the Boy (Fox)
An American Love Story: True Love (Fox)
An American Love Story: It's My Job (Fox)
An American Love Story: We Were Never Ozzie and Harriet (Fox)
Hitman Hart: Wrestling With Shadows (Jay)
A Cry from the Grave (Woodhead)
Grey Gardens (Maysies & Maysies, 1976)
Out of Phoenix Bridge (Hong)

2000

The Last Cigarette (Rafferty)
My Best Fiend (Herzog)
Genocide, the Judgement (Christoffersen)
One Day in September (Macdonald, 1999)
Norman Mailer - Oh My America: Farewell to the Fifties (Copans & Neumann)
Norman Mailer - Oh My America: Beyond the Revolution (Copans & Neumann)
Donald and Luba (Boyd)

2001

I Loved You (Kossakovsky)
Black and White in Colour (Erdevicki-Charap)
The Sweetest Sound (Berliner)
Fashion Victim - Killing of Gianni Versace (Kent)

2002

The Last Cigarette
Closing Down (Rossetto)
Three Salons at the Seaside
Rats in the Ranks
Baria's Big Wedding
George Wallace - Settin' the Woods on Fire: Part 1
George Wallace - Settin' the Woods on Fire: Part 2
First Contact
Black Harvest
Marlene
Cod Wars
Startup.com
Southern Comfort
Chain Camera (Dick)
My Sperm-Donor Dad
The Tour (Alexandowicz)
Down from the Mountain (Pennebaker, Doob & Hegedus)
More Sex Please, We're Scandinavian (Jensen & Beckendorff)
Town Bloody Hall (Pennebaker & Hegedus)
The War Room (Pennebaker & Hegedus)
Startup.com (Pennebaker & Hegedus)
Deliver Us From Evil
A Cry from the Grave
Carle Del Ponte
Much Ado About Something
Who Is Bernard Tapie? (Zenovich)
This Is Palestine (El-Hassan)
The Settlers (Walk)
Pie In The Sky (Fremont & Fremont)
Nico Icon (Ofteringer)
The Game of Their Lives
Ajax
Gods of Brazil
Picasso Days
The Gugulethu Seven
Meeting My Daughter (Heurlin)
The Unquiet Peace (Danziger)
The Jazzman from the Gulag (Salfati)
More than a Life (Holland)
Greedy in Thailand (Vasselin)
Robots Are Us (Salfati)
Domestic Violence (Wiseman)
Life and Debt in Jamaica
Cod Wars
Dark Days
Shadowplay
The Cuban Game (Cuenca)
Sincerely Yours (Phakathi)
Loius Mate's India (Mate)
See What Happens
Cry for Argentina
Kabul ER
Scottsboro
Muhammad Ali - the Greatest
Ghosts of Attica (Lichtenstein)
Last Party 2000 (Hoffman)
Journeys with George (Pelosi)
The Great War: The Somme (1964)
My Terrorist (Cohen-Gerstel)
Simon and I
Milosevic: How to Be a Dictator
Cool and Crazy (Jensen)

2003

Family (Saif)
A Texan Murder in Black and White
Buddhism: Wheel of Time
The Smith Family
Chavez, Inside the Coup
Waco, the Rules of Engagement
Robert Capa: In Love and War
Clusterbomb Footprints
Russia from My Window
Paris Brothel
The Man with an Opera House in his Living Room
Somewhere Better (Erdevicki)
Remember the Family
Cerro Rico: the Mountain that Eats Men
Seabiscuit
Live Forever (Dower)
Etre et Avoir (Philibert, 2002)
Gimme Some Truth
Morning Sun (Gordon & Hinton)
My life as a Spy (Woodhead)
To Live Is Better than to Die
Stevle (James)
Junoon: the Rock Star and the Mullahs

2004

I Am Trying to Break Your Heart (Jones)
The Crockettes
Power Trip (Devlin)
Congo: White King, Red Rubber, Black Death (Bate)
My Louis Armstrong Years (Kounda)
The Weather Underground (Green & Siegel)
War Feels Like War
The New Americans (5 eps)
Nelson Mandela Accused #1
Sophiatown
The Guguletu Seven
Comandante (Stone)
Condor: the First War on Terror
Germany: behind the Wall
The House of Saud
Army of One (Goodman)
Death at the Crossroads
Israel's Generals: Dayan
Israel's Generals: Rabin
Israel's Generals: Sharon
Trembling Before God
Gay Dads Paternal
Love and Diane
Standing In the Shadows of Motown
Marcel Ophuls: the Memory Hunter
Who Am I Now? (McDonald)
Games in Athens
The Importance of Being Elegant (Amponsah and Spender)
Game Over: Kasparov and the Machine
Control Room
The Fight
The Beauty Academy of Kabul (Mermin)
Christ Comes to the Papuans
The Curse of Oil
Parallel Lines
See You in the Future
Conrad Black: the Last Press Baron
RFK
Jesus Christ and George Bush
India: Final Solution
My Land Zion
Stones in the Park
Songs from The Producers

2005

Death on the Staircase (4 eps)
Citizen King
Prisoner of Paradise
The Liberace of Baghdad
Made In China
Barca: the Inside Story
The Natural History of the Chicken
Before the Flood: Tuvalu
Why We Fight
Another Road Home (Elon)
Life on the Tracks
McLibel
Dr Goebbels Speaks
Blind Spot: Hitler's Secretary
The Fog of War: Eleven Lessons from the Life of Robert S McNamara (Morris, 2003)
Lost in La Mancha (Futton and Pepe, 2002)
Chairman George: to Beijing via Athens
French Beauty (Lamche)
Excellent Cadavers - a Story
Me and My 51 Brothers & Sisters
A Cry from the Grave
Srebrenica: Never Again?
Small Pain For Glory
Death on the Staircase
Shake Hands with the Devil
Cod Wars
Guerrilla: the Taking of Patty Hearst (Stone, 2004)
The Wild Blue Yonder (Herzog)
The White Diamond (Herzog)
A Very English Village (Holland) (4 eps)
Peace One Day
Sitting for Parliament (Davis)
The Last Waltz (Scorsese)
The 50 Years War: Israel and the Arabs
The Wonderful World of Dogs
Liberia: an Uncivil War
The Standard of Perfection: Show Cats
Animaliclous
Little Dieter Needs to Fly
The Standard of Perfection:Show Cattle
How Vietnam Was Lost
Sir! No Sir! The GI Revolt
Cane Toads
The Fall of Fujimori
Jungle Magic
Kinsey
Standards of Perfection:Show Cats
Darwin's Nightmare
Our Brand Is Crisis

2006

Rat
The Smell of Paradise
My Architect
Born into Brothels: Calcutta's Red Light Kids
The Fine Art of Whistling
Hollywood and the Holocaust
Bus 174
Berlusconi Rules OK! - Viva Zapatero
Shakespeare behind Bars
Philip and His Seven Wives
Gangs of Medellin
The Pipeline Next Door
Tarnation (Caouette)
Sunny Intervals and Showers
A Company of Soldiers
Albert Maysles - the Poetic Eye
Overnight (Montana)
Riot On!
Behind the Couch
My Life as a Spy
What Remains
Abel Raises Cain
Prostitution behind the Veil
The Emperor's Naked Army Marches On
Hammer and Tickle
The Prisoner, or How I Planned to Kill Tony Blair
Orthodykes
Street Fight
The Russian Newspaper Murders
The American Ruling Class
The Team
37 Uses for a Dead Sheep
When the Levees Broke

2007

Blog Wars
Vision Man
Diameter of the Bomb
Godless In America
Milosevic on Trial
So Much So Fast
New York Doll
Al Franken - God Spoke
Abduction
Screamers
A Story of People in War and Peace
Cuba! Africa! Revolution!
Oswald's Ghost
How Much Is Your Life Worth?
Black Sun
You Must Be Number One - Shanghai Circus School
Laughing with Hitler
Office Tigers (2 eps)
Kike like Me
Once in a Lifetime: the Extraordinary Story of the New York Cosmos
Every Good Marriage Begins with Tears
TV Junkie
Andrew and Jeremy Get Married (Boyd)
The Glow of White Women
This Film is Not Yet Rated
The Undertaking
Iraq in Fragments
Belgrade Radio Warriors
Gimme Shelter
The Madrid Connection
Mr Vig and the Nun

2008

Stranded! The Andes Plane Crash Survivors (Heller)
Jonestown: the World's Biggest Mass Suicide (Nelson, Smith and Walker)
The Devil Came on Horseback (Stern & Sundberg)
The Polish Ambulance Murders
Orthodox Stance
Blue Blood
Very Russian Geniuses: My Class
Dance with a Serial Killer
Tito's Ghost
Dolce Vita Africana
Somebody Has to Live - the Journey of Ariel Dorfman
All White in Barking (Isaacs)
The English Surgeon
My Secret Agent Auntie (Collingridge)
The Battle for Jerusalem
My Israel
Flipping Out: Israel's Drug Generation
Bob Dylan's Indian Birthday
The Bonzos
The Biggest Chinese Restaurant in the World (4 eps)
Death of a Wag
The Father, the Son and the Housekeeper
The Chuck Show
The Burning Season
Flying: Confessions of a Free Woman (2 eps)
1968
The Day after Peace (Gilley)
Dirty Tricks: the Man Who Got the Bushes Elected
Roman Polanski: Wanted and Desired
Shot in Bombay
When Borat Came to Town
Operation Film-maker
Prodigal Sons
I'm Not Dead Yet
Please Vote for Me

2009

Blast!
Wild Art - Oily and Suzi Paint Predators
Heavy Load
Ghosts of the 7th Cavalry
Maradona - in the Hands of the Gods
Bulletproof Salesman
The Children's Ward
Hammer and Tickle
The Kawasaki Candidate
The Jazz Baroness
Up for Debate -Team Qatar
The Baby and the Buddha
The Jew Who Dealt with the Nazis
The Genius and the Boys
Blind Sight - Everest the Hard Way
Angels of Rio
The Trials of Oppenheimer
The Time of Their Lives
How The Beatles Rocked the Kremlin
Napoli: City of the Damned
Men of the City
War Heroes: Section 60 Arlington Cemetery
Hi Society - the Wonderful World of Nicky Haslam
The Horse Boy
Simon Mann's African Coup - Black Beach


August 15, 2015

The Analysis by Simon Wardley

Ok, this post provides a quick analysis of the Scenario. As a guide, this sort of analysis should take about 30 minutes. To get the most out of this exercise, read the scenario post, write your plan and then read this analysis. In a final post, we will go through gameplay.

The Analysis

First, let's start by creating a basic map. Our users are data centre operators; they need a mechanism for improving data centre efficiency in electricity consumption; and we have our software product, which is based upon best practice use of an expensive sensor that we purchase and a custom set of data. This is shown in figure 1.

Figure 1 - Initial Map



In this exercise, I'm going to slowly build up the map. Normally, I would just dive into the end state and start the discussion, but that would be like one of those "it's therefore obvious that" exercises in maths which often confound others.

First of all, I'm going to add some bits I know e.g. we anticipate an opportunity to sell into Brazil (I'll mark as a red dotted line) and we have a US software house in our market selling a more commodity version as a utility service (I'll mark as a solid red line as it's something that is definitely happening). From the discussion with the head of sales (who was rather dismissive of the US effort) and the strategy, I already know we're going to have inertia to any change, so I may as well add that in (a black bar).

Figure 2 - Brazil and US.


However, we also know that the US version provides a public API and has a development community building on top of it. The US company is also harvesting this, probably through an ILC like model. The consequence of this is that the US company will start to exhibit higher rates of apparent innovation, customer focus and efficiency in proportion to the size of their ecosystem. Those companies building on top of their API act as a constant source of differential for them. I've added that in the figure below.

Figure 3 - ILC play.


Given the US company's growth last year and that a shift from product to utility is often associated with a punctuated equilibrium, I can now take the figures and put together a P&L based upon some reasonable assumptions. Of course, we're missing a lot of data here, in particular the development cost of the software etc. However, we'll lump that into SG&A.

Figure 4 - P&L and Market.


Ok, so what I now know is that we seem to be a high gross margin company (i.e. a juicy target) and a good chunk of our revenue is recurring software licenses. If this is a punctuated equilibrium (which seems likely) then I expect to see a crunch time in 2020 between us and our US competitor as we will both have around 50% MaSh. Unfortunately, when that happens then they're likely to have higher rates of efficiency, apparent innovation and customer focus due to their ecosystem play. Furthermore I'm going to have inertia to any change, probably due to existing practices, the existing business and salespeople's compensation.
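
To see why the crunch lands around then, here is a back-of-the-envelope sketch (an illustration only, not a proper market model) that simply compounds the competitor's observed doubling from their current ~3% share against our steady ~40%:

    # Back-of-the-envelope: the US competitor holds ~3% of the market and doubled
    # last year; assume they keep doubling while our ~40% share holds steady.
    our_share, their_share = 40.0, 3.0
    for year in range(2015, 2021):
        print(f"{year}: us ~{our_share:.0f}%, them ~{their_share:.0f}%")
        their_share = min(their_share * 2, 100 - our_share)  # bounded by the rest of the market

    # The doubling line crosses our flat share around 2019-2020 -- the crunch point.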

If I do make a utility play then I'm going to need to gain the capability to do this, raise the capital needed to build a utility and launch fast. Let us suppose this takes two years. Then I'll be entering a market where my competitor has 8 years of experience with a large & growing ecosystem and 100% MaSh of the utility business (worth £30M to £60M). I'll have no ecosystem, no MaSh and a salesforce probably fighting me and pointing out how our existing business is worth £144M to £173M. In the worst case, if I haven't explained the play properly then they could even be spreading FUD about my own utility service and trying to get customers to stick with the product.

Even my own board could well push against this move and the talk will be of cannibalisation or past success.  Alas, I know our existing business is a dead man walking. Post 2020 things are going to be grim and by that I mean grim for us. Despite the competitor only being 3% of the market, I've already left it late to play this game. I've got some explaining to do to get people on board.

Unfortunately there is more bad news. Let us look at the other changes in the market, such as the shift in sensors.

Figure 5 - Change of sensors.


Now, we've already seen signs of inertia in our organisation to using these sensors. As the product manager says, they're not as good as the old ones. However, we also know that as an act becomes a commodity then practices co-evolve and new methods of working emerge. Hence the future systems probably won't have one sensor in the data centre but dozens of cheap ones scattered around. Unfortunately, our software encodes best practice around the expensive product-based sensor, and if the practice evolves then our software is basically legacy. I've added this to the diagram below; however, rather than using a solid red line (something we know is happening), in this case I've used a dotted line (something we anticipate, or an opportunity).

Figure 6 - Co-evolution


So, our business is being driven to a utility and we don't have much time. Even if we get started now then by the time we launch we'll be up against an established player with a growing ecosystem. Our own people will fight this change but even worse our entire system will become legacy as commodity sensors lead to co-evolved practice and new software systems designed around this. So along with my head of sales and marketing fighting me, I'm pretty sure I can add the product manager and a good chunk of an engineering team that has built skills around the old best practice. 

Now, if you're used to mapping then you'll have spotted both the punctuated equilibrium and the danger of co-evolution. As a rule of thumb, these forms of co-evolution can take 10-15 years to really bite (unless some company is deliberately accelerating the process). Hence, even if we somehow survive our current fight in the next five years, we're going to be walking smack bang into another one five years later.

Of course, at this point I need to start to consider the other players on the board e.g. the US competitor. They're already providing a utility play, so we can assume that they have some engineering talent in this space. This sort of capability means they're likely to be pre-disposed to building and using more commodity components. The chances are, they're already thinking about the commodity sensors and building a system to exploit this. That could be a real headache. I could spend a couple of years getting ready to launch a cloud based service based upon the expensive product sensors and suddenly find I'm not only behind the game but the competitor has pulled the rug out from under me by launching a service based upon commodity sensors. I'll be in no man's land.

The other thing I need to look at is that conversion data issue. I know it's evolved to a product but it could easily be pushed to more of a commodity or provided through some API, and someone could play some form of open data ecosystem game on me. I've shown this in the following diagram.

Figure 7 - Data Ecosystem


I've now got a reasonable picture of the landscape and something I can discuss with others. Before I do, let us check the proposed "Growth and sustainability in the data centre business" strategy.

First up was expansion into Brazil. This will require investment and marketing but unfortunately it doesn't deal with the issues in our existing market. At worst, we could spend a lot of cash laying the groundwork for the US company to chew up Brazil after they've finished chewing us up. Still, we need to consider expanding, but if we do so in our current form then we're likely to lose.

Second was building a digital service, including a cloud based provision of our software system that enables aggregated reporting and continues the licensing model. Ok, one of the killer components of the US system is the API and the ecosystem it has built around this. We could easily invest a significant sum and a few years building a cloud based service, enter the market and be outstripped by the incumbent (the US company) because of their ecosystem, and even worse find our entire model is now legacy (because of co-evolved practice). I know it's got the words "digital" and "cloud" in the strategy but as it currently stands then this seems to be a surefire way to lose.

Thirdly, the strategy called for investment in sales and advertising. Well, we've plenty of cash but promoting a product model which as it stands is heading for the cliff and may become entirely irrelevant seems a great way of losing cash.

Lastly, we're to look into the use of the data conversion product. Ok, this one doesn't seem so bad but maybe we should drive that market to more of a commodity? Provide our own Data API? 

On top of all this, we have lots of inertia to deal with. Now that we understand the landscape a bit better then we can craft a strategy which might actually work. Of course, I'll cover that in another post. However, in the meantime I'd like you to go and look at the scenario, look at your original plan and work out how you might modify it.

Happy Hunting.


A scenario by Simon Wardley

A scenario for you to run through. Have a think, write down your answers and later on I'll add a post as to things you should have considered.

The Scenario

You’re the CEO of a UK based company serving the European Market. Your company produces a software system that monitors a data centre's consumption of power in order to determine whether power is being used effectively. The system involves a proprietary software system which runs analytics across data from a sensor that is installed in the data centre. The sensor is a highly expensive piece of kit which monitors both the electricity input into the building, the temperature of the building & airflows. The analytics software is based upon best practice for use of this sensor. The sensor itself consumes conversion data that your company creates.

You’re profitable with a revenue in excess of £100M p.a., a net margin of 15% and an annual growth rate of 20%.  You have a healthy cash flow and reserves of around £25M.  The process of setting up a new client involves installing a sensor, setting up the equipment and a two year license fee for the software. Around 40% of your revenue comes from recurring license fees and 85% of the initial first-year costs for a client are related to the purchase of the sensor.

Whilst you have some competitors in Europe, most of these are custom built solutions. You’re the only one with a software product. There’s a more developed market in the US and even a software as a service offering which uses the same sensors, but the software is sold on a utility basis rather than a license fee. The US solution also provides cross company reporting, industry analytics and a public API, something which your system does not. However, as your head of marketing points out, the US competitor (a much larger company) has been operating in Europe for almost 7 years and represents less than 3% of the market, though their CEO claims they are growing rapidly and doubled in size last year. There are a number of other companies’ products built on your competitor's APIs and a fairly active development community around them. However your head of sales chimes in that we rarely come across them in competitive tenders and in any case there have been some blog posts about your competitor 'eating up' the business model of some of those products by adding similar capability into their own system. The head of sales points to data showing that in the European market, your company has around 40% MaSh (which is holding steady) and the current market represents 70% of the total applicable market. Both the head of sales and the head of marketing agree we should focus on increasing our MaSh (market share) by focusing on sales and advertising.

Your head of operations points out that there is a range of new, more commodity-like sensors that have been launched in China by an extremely large and well respected manufacturer. They’re far simpler, vastly cheaper (about 1/100th of the price) and highly standardised. However, they are also basic and lack the sensitivity of the sensor we use. The product manager points out that we have attempted replacing the expensive sensor with one of these cheaper versions but the performance and analysis was severely degraded. The product, operations and sales managers all agree that these cheaper sensors aren’t up to the job. In the conversation, the product manager points however to another opportunity. One of the significant costs in the system is in the conversion data, which requires extensive testing and modelling of various bits of kit within the data centre.  Whilst this is done in-house, there is now a product available on the market which offers this conversion data. It’s not as good as our data at the moment but the product is vastly cheaper than our in-house operations and we could therefore reduce costs here.  Your head of marketing supports the idea as there is some recent evidence that, despite the benefit (in terms of energy savings through efficiency) that the system allows, there is some concern over the high cost of the software in the market. The product manager believes we should investigate, though this faced some resistance from both the head of operations and the head of IT. You do not feel you have a deep enough technical understanding to answer this.

On the revenue side, the head of sales points out there is a growing market of data centres in Brazil which currently no-one is providing a solution for. They consider this to be a highly attractive future market and would like to investigate. Your head of strategy also agrees. 

The new strategy which is focused on a vision of “Growth and sustainability in the data centre business” has highlighted a number of possibilities. First is expansion into overseas markets such as Brazil. Second is provision of a more digital service including a cloud based service for provision of the software (enabling aggregated reporting) but provided on a license basis in order not to create conflict with the existing model but also to counter any threat from the US system.  Thirdly, we should undertake a significant marketing campaign to promote our solution in the existing market. Lastly the report focuses on efficiencies in operation including investigating the use of the data conversion product that is available. 

What do you do?

Add your 'answers' in the comments below.

Once you're done, then you can move onto The Analysis


Halcyon Days. by Feeling Listless



Film Fascinating insight into what was state of the art in digital manipulation twenty years ago, courtesy of Film 95. Look at all the paper on Dr Boudry's desk!


SOFIA or American Sniper? by Astrobites

Title: First exoplanet transit observation with the Stratospheric Observatory for Infrared Astronomy: Confirmation of Rayleigh scattering in HD 189733 b with HIPO

Authors: Daniel Angerhausen, Georgi Mandushev, Avi Mandell, et al.

First Author’s Institution: NASA Goddard Space Flight Center

“Observing with SOFIA is like riding a horse and trying to shoot a dime.” This fantastic analogy was offered by the first author of this paper at the ERES conference this past May. Now, let me explain. There hasn’t been too much talk on astrobites about SOFIA (the Stratospheric Observatory for Infrared Astronomy), although in 2012 there was a great overview article if you want to check that out. The punch line, though, is that SOFIA is a 2.5-meter telescope mounted on a massive Boeing 747-SP aircraft, which flies up to 45,000 feet through Earth’s stratosphere. And in today’s bite, Angerhausen et al. successfully used SOFIA to observe a transiting exoplanet system. Aka, they rode the horse and shot the dime. This might sound insane to you but there is some real motivation for why scientists and engineers decided to do this.

It’s no secret that ground-based observations are difficult since you have to look through the Earth’s atmosphere. Water vapor, in particular, causes a lot of headaches because we are also interested in measuring water vapor in exoplanetary atmospheres. Think of holding two different translucent filters, one in front of the other. If the front filter was red and the back one was blue, you just might be able to discern the color of the blue filter. But, if the front one is red and the back one is also red, you might guess that it’s red, but it also could be clear, yellow, or any other light-ish color. Water vapor in our atmosphere is akin to the first red filter and, with respect to exoplanet atmospheres, we are trying to discern the color of the second red filter. SOFIA doesn’t get us completely past that first red filter (that would require going to space), but it does get us most of the way there (Earth’s stratosphere). This makes it a great alternative to expensive space-based observatories.

That being said, SOFIA has its fair share of observational obstacles. Because you are observing in the stratosphere, you are still looking through some of the Earth’s atmosphere. However, at those heights, gases like water vapor, carbon monoxide, and methane are not affected by seasonal variations like they are on the ground. This makes it easier to remove what we call “telluric features” (contamination from Earth’s atmosphere) from your observations.

Now, imagine being in an aircraft and having to hold your telescope perfectly still for the duration of an exoplanetary transit (~2 hours!!!). It’s actually worse than shooting a dime from the back of a horse. It’s more like riding a horse while holding a laser pointer and trying to keep the light on the dime for 2 hours. Although this is incredibly difficult and your observation is often limited by how still you remain while pointing, if you can keep track of exactly how you moved throughout the observation, you can remove those effects from your data. This is like if I drove from DC to NYC and you followed me in a helicopter writing down every turn and bump so that later (if I happened to be blindfolded) you could get me back to DC from NYC.

With all this in mind, Angerhausen et al. put these concepts together and made the very first exoplanet observation using SOFIA. Because this is new, it behooved the observers to look at a planet that had already been studied. They chose HD 189733 b, a planet the size of Jupiter but far closer to its parent star than Mercury is to our Sun. We call these hot Jupiters, and they are fairly easy to observe since they are bright and orbit quickly. Nevertheless, there have been big debates over the discovery of water, methane and carbon dioxide. These debates arise because it is unclear whether we do in fact see these species or whether we are looking at clouds or hazes in the atmosphere. One group proposed that Rayleigh scattering dominated HD 189733 b’s atmosphere. Rayleigh scattering is the same light-scattering phenomenon that makes Earth’s sky blue, as seen in the little cartoon, and it affects the blue portion of a planetary atmosphere’s spectrum.

Rayleigh scattering phenomenon

In order to settle some of these debates, Angerhausen et al.’s observations were designed to detect the presence of Rayleigh scattering and/or confirm or reject the notion of the presence of water vapor in the planet’s atmosphere. The following image is the final light curve of the transit. I feel guilty skipping over the analysis that went into producing this precise data, so one should really go and briefly check out what goes into something like this.

Transit light curve of hot Jupiter HD 189733b taken on board SOFIA with the HIPO instrument. Transit light curves are just brightness as a function of time. The dip in brightness is the result of the planet passing in front of the star.


The light curve is only half the story, though. In order to detect atmospheric features, we not only need to detect the dip in brightness as the planet crosses in front of the star, we also need to detect small changes in transit depth as a function of wavelength. This will ultimately give us the spectrum we need to confirm or deny Rayleigh scattering and water features. Take a look at this bite if you’d like to learn more about this procedure! The next figure shows the two data points (in black) next to previous observations made with the Hubble Space Telescope and Spitzer. Right away you will notice the increase in the SOFIA HIPO data towards shorter (bluer) wavelengths. This increase in slope can also be seen in Earth’s atmosphere and is a clear indication of Rayleigh scattering. It also matches the red and green HST data previously taken. This tells us that there are particles, or condensate grains, in the upper atmosphere scattering the blue end of the spectrum.
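
To get a feel for what that wavelength-dependent depth means, here is a toy sketch of my own (not the authors' fitting code) using the standard analytic result that a Rayleigh-scattering atmosphere makes the apparent planet radius grow towards the blue roughly as dRp/d(ln λ) = -4H, where H is the atmospheric scale height; the numbers below are illustrative, not fitted values:

    import numpy as np

    # Illustrative numbers for a hot Jupiter like HD 189733 b (not fitted values).
    R_star = 0.76 * 6.957e8    # stellar radius in metres (~0.76 solar radii)
    R_p0   = 1.14 * 7.149e7    # planet radius at the reference wavelength (~1.14 Jupiter radii)
    H      = 2.0e5             # assumed atmospheric scale height, ~200 km
    lam0   = 750e-9            # reference wavelength, 750 nm

    def transit_depth(lam):
        """Transit depth (Rp/Rs)^2 with a Rayleigh slope dRp/d(ln lam) = -4H."""
        R_p = R_p0 - 4.0 * H * np.log(lam / lam0)
        return (R_p / R_star) ** 2

    for lam_nm in (400, 550, 750, 1000):
        print(f"{lam_nm} nm: depth = {100 * transit_depth(lam_nm * 1e-9):.3f}%")

The depth comes out slightly deeper at bluer wavelengths, which is exactly the slope the SOFIA points trace.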

Final data points taken with SOFIA (in black). The other colors represent previous observations of HD 189733b. Because the black points match the red and green HST data, this confirms the presence of Rayleigh scattering.


Although there was no water vapor found in these observations, the authors were able to do two incredible things.

  1. They proved for the first time that SOFIA can be used to do precise exoplanet transit spectroscopy
  2. They settled the “Rayleigh scattering” debate for HD 189733b

At the end of their paper, Angerhausen et al. argue for a dedicated exoplanet instrument onboard the SOFIA aircraft. If this is realized, you can expect many more groups to take up the challenge of shooting the dime while riding the horse.


Curiosity update, sols 1012-1072: Sciencing back and forth below Marias Pass by The Planetary Society

Since my last update, Curiosity has driven back and forth repeatedly across a section of rocks below Marias Pass. The rover finally drilled at a spot named Buckskin on sol 1060, marking the drill's return to operations after suffering a short on sol 911. Now the rover is driving up into Marias Pass and onto the Washboard or Stimson unit.


August 14, 2015

Where Have All the Women Gone? by Charlie Stross

(Charlie's away and his blog has been taken over by invisible assassins.)

It's as regular as summer thunder. A very serious article or a very serious tweet or a very serious wonder-aloud in a convention bar.

"How come women don't write science fiction/fantasy/insert subgenre-not-romance here? Or why haven't they written it since, like, well, last week when I read one by a lady and I thought it was pretty good and I think, did it win an award or something? But there aren't any others and I don't get it." Sometimes with bonus, "Do I have to write it myself?"

I used to say I had a superpower. In person, online, you name it. I'm invisible. A very famous publisher once said, "She might as well write in invisible ink for all the notice she gets."

That Buffy episode with Invisible Girl? Yep. Except the part where (SPOILER SPOILER SPOILER SPOILER) she's whisked away at the end to a secret training facility for spies and assassins.

Point being that not only was she not alone, she had a whole tribe to belong to, doing important and deadly things. And the visibles of the world would never see her coming.

It's that dratted second X chromosome. The X factor. Crosses you right out.

Women have a shelf life. When they're young and cute, they get attention--a fraction as much by the numbers as the boys, and often relegated to the short reviews or the niche commentators, but it happens. Then as they age, the boys become revered elders. The girls undergo a winnowing process that pulls out one or two as tokens of their gender, and those are the wise ones, the names always cited when listing women in genre. The rest are erased. And the very serious pundits inquire, "Why aren't there any women in genre?" Or, "Why didn't women write in genre before, like, last week?"

They did. We did. All the way back. We have always been here. We have always written science fiction.

This is what the Women in Science Fiction project is about. And the Women in Science Fiction Storybundle (the link explains the concept, and lists the books in the bundle). Along with many lists and shout-outs and twitterstorms.

It's not even conscious. Hear a woman's voice, see a woman's name, slide right on by. Just today I had a twitter conversation with a very nice man, very concerned that women writers weren't featured in a certain popular series on a certain eminent blog. He was trying to redress what he saw as unfairness.

And yet that series contains multiple entries by and about women writers. They're talked about, read, commented on. They get good numbers. My contribution has been going on for over a year now and is in its second set of books by one woman writer.

Invisible ink. Hear no women, see no women, take no notice when women speak.

And the older the women are? The more invisible, inaudible, unnoticed they are. Unless they're the tokens, of course. The "I included her, therefore I included all women" names that are on every list, because that makes it all right. Right?

Lucky for us publishing has changed so profoundly in this millennium, and works that used to be erased are now coming back--and with them, the authors who were dropped and silenced over the years. Lucky too that the culture has shifted and people of all genders are more aware of what's been happening, for the most part subliminally, to anyone not straight, white, male.

We were here all along. We never stopped being here. Now, finally, we're not letting ourselves be dumped off the shelf. Even by very serious people with very good intentions who just, you know, didn't notice. And think it's terribly unfair.


My Interview with Juliette Binoche. Sort of. by Feeling Listless

Film Speaking of fiascos, another on the list of "how did this happen" films is Bee Season, which, along with The Juror, August Rush and Running With Scissors, is my go-to example of a project gone amok through creative hubris. Here's the trailer:



And the Kermode review.



Right. Ok. It's rubbish. It has a scene in which Richard Gere spends ten minutes explaining Buddhism very slowly to his daughter, and it casts Juliette Binoche as a kleptomaniac, and that's not the main storyline. I've always wondered what attracted her to the film. So when The Guardian decided to have a Q&A with the actress I had to ask:



Which is why I tend to be quite sympathetic to actors in films, who're always at the mercy of whatever takes the editor and director use, and indeed to the director. If a usually good actor seems out of sorts in a piece, it's often because of circumstances beyond their control, and quite often an actor can only be as good as the part. The Sue Storm debacle is not Kate Mara's fault.


Fantastic Four: The Mudslinging begins. by Feeling Listless

Film Having had a cold these past couple of days, I entirely missed the Hollywood Reporter piece about Fantastic Four, which is glorious in its detail but lacks just the right amount of information to make a Final Cut or The Devil's Candy style book about the fiasco a tempting prospect. Notably this paragraph:

"As filming wound toward an unhappy close, the studio and producers Simon Kinberg and Hutch Parker engaged in a last-minute scramble to come up with an ending. With some of the cast not fully available at that point and Kinberg juggling X-Men: Apocalypse and Star Wars, a lot of material was shot with doubles and the production moved to Los Angeles to film scenes with Teller against a green screen. "It was chaos," says a crewmember, adding that Trank was still in attendance "but was neutralized by a committee." Another source says the studio pulled together "a dream team," including writer and World War Z veteran Drew Goddard, to rescue the movie. Whether the final version of the film is better or worse than what Trank put together is a matter of opinion, of course, but the consensus, clearly, is that neither was good."
Wow. The usually brilliant Drew Goddard was allegedly sucked into this singularity as well. Did he actually write and direct any of the closing material?  Was he another bystander on the road, and will we discover he didn't really have anything to do with it either in the end?  What, again, we ask, was the participation of Matthew Vaughn (curiously unmentioned in this piece)?

Meanwhile, here's the also usually brilliant Richard Brody in the New Yorker seeming to make the case for the film almost being a comic book film for people who don't like comic book films, even though in my experience, people who don't like comic book films will never like comic book films.


GPS velocity records by Goatchurch

I’ve got a Global Top Inc FGPMMOPA6H GPS module (datasheet) in my hang-glider data logging device. Using the command packet:

$PMTK314,0,50,1,1,10,0,0,0,0,0,0,0,0,0,0,0,0,0,0

I’ve programmed it to give me a GPRMC every 50 cycles, a GPVTG every cycle, GPGGA every cycle and a GPGSA every 10 cycles.

The command $PMTK220,200 sets the length of the cycle to 200ms, so I'm getting a positional and velocity reading 5 times a second.
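
For what it's worth, the full sentences sent to the module carry the usual NMEA checksum (an XOR of every character between the $ and the *). This isn't my logging code, just a minimal Python/pyserial sketch of building and sending those two commands; the port name and baud rate are placeholders for whatever your wiring uses:

    import serial  # pyserial

    def pmtk(body):
        """Wrap a bare PMTK body (e.g. 'PMTK220,200') with $, the NMEA checksum and CRLF."""
        checksum = 0
        for ch in body:
            checksum ^= ord(ch)  # XOR of every character between $ and *
        return f"${body}*{checksum:02X}\r\n".encode("ascii")

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as gps:  # placeholder port/baud
        gps.write(pmtk("PMTK314,0,50,1,1,10,0,0,0,0,0,0,0,0,0,0,0,0,0,0"))  # sentence mix
        gps.write(pmtk("PMTK220,200"))                                      # 200ms cycle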

The code for controlling all this is here. Note that my code does not contain hundreds of lines of #defines of the form:

#define PMTK_API_SET_NMEA_OUTPUT 314

that you tend to get in other people’s programs for the purpose of referencing the this-will-never-change-hardware-encoded 3-digit string ‘314’ by the arguably more readable (ie I will argue with you) string ‘PMTK_API_SET_NMEA_OUTPUT’ that serves no purpose, isn’t interpreted by anything except the preprocessor, and you have to look it up to get back to the number that is actually documented in the manual. Why is this controversial? </rant>

The GPS device records I’m interested in are the GPGGA (GPS time, position and fix related data) and the GPVTG (course and speed information relative to the ground).

I gave up on the default GPRMC (recommended minimum navigation information) record that claims to contain both, except it’s missing the crucial value of the altitude (which is only in GPGGA) and the speed is given in knots rather than kph. I do, however, need it for the date record (GPGGA only has time from midnight), which is why it’s throttled back to every 50 cycles to avoid wasting bandwidth. The GPGSA is for the “dilution of position” data, which is not enough to compute accuracy, so I should get rid of that one.

Accuracy is actually controlled by matters like the analog timing conversions done inside the device, so it’s probably not observable to itself. I mean, if you really had to have this number then you could carry a super-accurate second device that really knows its position, and then compare the readings to this cheap device. But then why not throw the cheap device away and use the super-accurate one? And repeat for the next generation of accuracy.

There’s a theory that the velocity record is computed from the Doppler shift of the satellite signals, rather than by taking the difference between subsequent GPS positions, but there’s no mention of this in the datasheet.

Let’s check this out.

The first thing to establish is what the velocity record corresponds to. The records come down the serial line like so: GPGGA, GPVTG, GPGGA, GPVTG,… a pair every 200ms.

I need to know if the GPVTG corresponds to the GPGGA in front or behind, or if it’s calculated in the middle of the cycle.

After some experimenting with different cycle rates and baud rates I’m satisfied that all the records are computed at once, and then serialized out back-to-back in the order GPGGA, [GPGSA, GPRMC,] GPVTG, so I’ve associated the velocity to the preceding position record, and the position record contains a time value while the velocity record doesn’t.

Suppose we’ve got a sequence of times, positions and velocities:

(t1,p1,v1), (t2,p2,v2), (t3,p3,v3), …

A cycle rate of 200ms means that t2-t1 = t3-t2 = 0.200 seconds.

Now, if the velocities are computed by differencing the positions, then as it could only refer to positions in the past, you would expect v2 to match the value (p2-p1)/(t2-t1).

However, if the velocity is instantaneously calculated from the Doppler shift, then v2 would match the value (p3-p1)/(t3-t1), in other words an average that spans the forward and backward position samples.
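
In code, the comparison looks roughly like this; a stripped-down sketch (not the real thing, and assuming the positions have already been projected into metres), with the full version being the EstimateVelocityChord() function mentioned below:

    import math

    def speed(p_a, p_b, dt):
        """Average ground speed in m/s between two (x, y) positions dt seconds apart."""
        return math.hypot(p_b[0] - p_a[0], p_b[1] - p_a[1]) / dt

    def velocity_errors(samples):
        """samples: list of (t, (x, y), v) with t in seconds, positions in metres, v in m/s."""
        back, centred = [], []
        for (t1, p1, _), (t2, p2, v2), (t3, p3, _) in zip(samples, samples[1:], samples[2:]):
            back.append(abs(v2 - speed(p1, p2, t2 - t1)))      # backward difference
            centred.append(abs(v2 - speed(p1, p3, t3 - t1)))   # chord spanning both sides
        n = len(back)
        return sum(back) / n, sum(centred) / n

If the centred error comes out much smaller than the backward one, that's a point in favour of the Doppler theory.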

For this I’ve written the EstimateVelocityChord() function, which takes one of my flight GPS sequences and generates some graphs:

gpsvel
[avg error is by height on the grid where d0 is the sample count before present, and d1 is sample count after present (intuitively speaking), and the green splurge is showing the scatter plot of errors at the optimal selection]

The answers are different for each of the different flight recordings, of course, and this is highly dependent on the turning circles of the glider (it’s an unusual trace where there’s never a straight line and there are no sharp movements), but generally if you take the chord length from the position at around -1.2 seconds to the position at around +0.8 seconds and divide by the time, then your average velocity error is 0.65m/s.

I wonder if we can use this to predict GPS error.

Reason I ask is I’ve been carrying around two GPSs (if you don’t count the one in the vario) and they disagree at times, like in this example at the start of a flight:
gpspair

Unfortunately I can’t work with the data on that flight because it’s in Austria and I’ve not made the conversions from those GPS coordinates into metres. But I can do it for a recent flight in England nailed to a hill for four hours where the second flight computer datalogger was in the tail of my harness and crashed after the first half hour.

Here are the overlaid GPS traces, which look more globally offset than locally glitched like the one above.
gpspair2

Below I’ve plotted, along the time axis (horizontal), the difference between the two positions (in white) against the sum of the errors between the instantaneous Doppler-measured velocity and the velocity calculated from the GPS positions.
gpspairerr
I don’t see a correlation.

I’ll keep this pile of code to hand for when I’ve got more data. I’ll try to mount the second device next to the first one on the base-bar instead of in the harness where it’s moving around. And I’ll look for cases where there is a distortion as in that first diagram as it’s extremely unlikely that the errors in the velocity measurements will have tracked that crazy unflight-like motion exactly.

The point is to be able to filter for the sections where the data is reliable in order to extract the flight constants. I’ve still not got anywhere with this data analysis — other than to prove that most of my original ideas aren’t going to work.

My current strategy is to snip out short sequences of the flight of about 5 to 10 seconds in length and then hope to categorize them before deriving the flight envelope characteristics. Being able to discard segments where the GPS data is bogus will be a useful stage in the processing.


How I&#39;m listening to music right this second. by Dan Catt

djPro

The shutting down of This Is My Jam and this post over here, "The music Web is now so closed, you can't share your favorite song", reminded me to jot down a note about how I'm listening to music right at this very moment.

This is the first time I've been able to listen to music all week, which is kind of unusual. Monday was getting ready for Zachary's birthday; Tuesday I was in London, listening to podcasts rather than music on the train there and back. Wednesday was Zachary's birthday, and so far this morning I've been catching up with all the usual email/tumblr/RSS admin.

So just now I've opened up Spotify to look at what's waiting for me in the new "Discover Weekly" playlist: 30 songs, a couple of hours of music. Each week I create a new playlist, "Discover Weekly 001" and so on, and drag the tracks from the current Discover playlist into the new one.
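
(Side note: if I ever get bored of the dragging, the same archiving could probably be scripted. Here's a rough sketch using the spotipy library; the credentials are assumed to be set up in the environment the way spotipy expects, and the exact scopes and playlist calls are from memory rather than checked against the docs, so treat it as a starting point rather than a recipe.)

    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        scope="playlist-read-private playlist-modify-private"))

    DISCOVER_WEEKLY_ID = "..."             # id of this week's Discover Weekly playlist
    archive_name = "Discover Weekly 001"   # bump the number each week

    me = sp.current_user()["id"]
    archive = sp.user_playlist_create(me, archive_name, public=False)
    tracks = sp.playlist_items(DISCOVER_WEEKLY_ID)["items"]
    sp.playlist_add_items(archive["id"], [t["track"]["uri"] for t in tracks])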

I only decided to start doing this in the second week, so I had to go back through my last.fm played tracks to rebuild the first week.

Archiving is now my default behavior on the web. I'm not sure what I'm going to do with all these playlists of suggested tracks but it sure feels like a good idea to keep a record of them for some future project/listening.

Once the playlist is made it's time to switch over to djay Pro, which will happily queue up and play playlists from Spotify. I much prefer djay Pro over Spotify for playback when I'm not exploring music and am doing something else (like coding). It uses The Echo Nest API to match key, BPM and genre for similar tracks in its Automix Radio feature.

And that's how it goes... Spotify Discover Weekly goes in, I listen to that, pick a track I particularly like from it, queue that up as the root track for Automix Radio and let it go.

When I am paying attention to music, often just before I do the washing up or eat with the kids, I'll use Spotify to explore new music using their "Related Artists" feature, but I guess I'll save that for another time and get back to work, while listening to this week's Discover playlist.


SLS Engine Roars through Flight Test in Mississippi by The Planetary Society

NASA completed the sixth and next-to-last test firing of its RS-25 engine Thursday in Mississippi. Four RS-25 engines power the Space Launch System core stage.


ESA's cool new interactive comet visualization tool based on amateur imaging work with open data by The Planetary Society

A terrific new visualization tool for comet 67P/Churyumov-Gerasimenko demonstrates the value of sharing mission image data with the public. The browser-based tool lets you spin a simulated 3D view of the comet. It began with a 3D model of the comet created not by ESA, but by a space enthusiast, Mattias Malmer.


August 13, 2015

On Platforms and Ecosystems by Simon Wardley

This stuff is a decade old for me and I can barely drag myself to repeat it. But, I've read lots of daft stuff recently on platforms and ecosystems normally out of the mouths of half witted strategy consultants, so I will. 

The reason why you build a platform is to enable an ecosystem. A platform is simply those components (ideally expressed through APIs) that your ecosystem exploits.

The reason why you build an ecosystem is for componentisation effects and to exploit others through data mining on consumption. 

If you create a platform of commodity components (ideally as utility services) then you not only enable the ecosystem to quickly build (increase agility) but reduce costs of failures. By mining what they build (by looking at consumption of your components) you can use this to identify patterns useful for others. Hence, you can leverage your ecosystem to spot useful patterns which you can then commoditise to new components in the platform to help grow the ecosystem. This is a model known as Innovate - Leverage - Commoditise and it's so old, it's dull. You can call it network effects if you must.

Effective exploitation of an ecosystem depends upon you actively detecting those new patterns, the speed at which you can detect new patterns and the size of the ecosystem. 

If effectively exploited, then your apparent rate of innovation, customer focus and economies of scale all increase with the size of the ecosystem. 

A few basic pointers.

1) If you don't focus on user needs and reducing friction (i.e. making it easy to use) then you lose. No-one will turn up or those that do will quickly leave for something else or build their own out of desperation.

2) If you limit your platform to internal use (i.e. company only) then your ecosystem will be smaller than that of a company which exposes its platform to the public. Their rate of apparent innovation, customer focus and efficiency will massively outstrip yours as their ecosystem becomes larger. You lose.

3) If you fail to data mine consumption then you won't be able to leverage the ecosystem to spot new patterns that are useful to the ecosystem. Your ecosystem and platform will stagnate compared to a competitor that does this. You lose.

4) If you do mine your ecosystem and aggressively harvest without giving the ecosystem reasons to stay ... everyone will run away. You lose.

5) If you build a platform based upon product concepts then the cost of data mining consumption becomes high and the speed low compared to a platform providing utility components through an API. If you're trying to build a platform of products against a competitor who is providing a utility then - you can guess. You lose.

6) If you build a platform with components that are not industrialised (i.e. commodity like) then the interfaces will continuously change and your ecosystem will not be able to rely on you. If you're up against someone who industrialises those components then ... you lose.

7) If you have little to no ecosystem and decide to take on a large ecosystem in the same space without co-opting, then (assuming they are public, provide industrialised components through an API as a utility, focus on removing friction and user needs, and data mine effectively) ... you lose. You never get a chance to catch up.

8) If you build new components on a platform and fail to implement a mechanism of evolving those components to industrialised services then you build up technical debt. Over time, you build new upon new and this becomes spaghetti junction. Your platform creaks and collapses. The fastest way I know to do this is to have one team building new stuff, one team taking care of the platform and no-one in between. This creates almost an internal war of them vs us exacerbating the problems of technical debt. Against anyone with a faintest clue of what they're doing ... you lose.

9) If I say phrases like ILC, two factor market, supplier ecosystem and you go "eh?" ... you'll probably lose. There are many forms of ecosystems with many different models and mechanisms of exploitation. Try to learn the different types.

10) If you think platforms are all about marketing ... you lose.

11) If you think platforms are all about engineering ... you lose.

12) If you think platforms are easy (ps. I built the first platform as a service in 2005 and ran a large single sign on and imaging platform back between 2001-2006 with many millions of users) then don't even bother. You'll lose.

13) If you think the secret is to build an API specification, call it a standard, even an open standard and vendors will all come and build against it creating your ecosystem in the sheer delight of your wonderful gesture ... oh dear, you're in so much trouble. Cheaper to open your wallet to others and say "help yourself".

There's more but I'd rather gnaw my leg off than talk about platforms and ecosystems again. This is enough to begin with.


Small, but powerful by Astrobites

Title: Atmospheric Mass Loss During Planet Formation: The Importance of Planetesimal Impacts

Authors: H. Schlichting, R. Sari & A. Yalinevich

First author’s affiliation: MIT

Paper status: submitted to Icarus

Earth and its Atmosphere

Do you know what the mass of Earth's atmosphere is compared to the planetary mass? It's about one in a million. Only one in a million. In comparison, Venus – often called Earth's sister planet, because it is very similar to Earth in mass and radius – has an atmosphere-to-planet ratio of about one in ten thousand. As a curious Astrobite reader, you might be thinking: "Interesting. Go on and tell me the reason for it!" At this point, I have to disappoint: there is no clear answer to the question yet. Nevertheless, there are multiple hypotheses. An attractive explanation is that Earth's atmosphere used to be more massive in the past – as suggested by Venus' atmosphere – but was depleted later. The authors of today's featured paper assume that impacts on the Earth by small objects caused a reduction of the atmosphere's mass. The exciting part of their work is that they figure out what size of impactor is potentially best at reducing the atmosphere. If you are curious about the physics behind their study, read the next paragraph; if you only care about their result, go straight to the conclusion. (Warning: you will miss some useful conceptual considerations that are relevant for exoplanets as well.)

Figure 1: Impact of a massive object impacting a planet. After the collision, atmospheric mass is lost close to the impact as well as on the other side through a global shock propagating through the planet.


Large and small impactors trigger different forms of atmospheric loss

The authors seek to get a rough estimate of the best impactor size based on the dominant processes and under several assumptions, without getting lost in too many details. Let's go through the fundamental scenarios step by step. If you want to eject mass from an atmosphere through a collision, the impactor must pass through the atmosphere and reach the surface of the planet. It is known from previous studies that the relevant physical quantity in this process is momentum, and thus mass and velocity. Impactors would have a distribution of velocities, but neglecting the higher velocities is fine for the purpose of a rough picture. For simplicity, the authors assume the same velocity for all impactors, roughly the threshold velocity for not being captured by the gravity of the planet (the so-called escape velocity v_{esc}=\sqrt{2GM/r}). The chosen velocity is motivated by the fact that most objects with much higher velocities would just fly by the planet. The impactors must have a minimum mass m_{min} to eject part of the atmosphere. By assuming that the impactors are of similar density \rho_{imp} and using \rho_{imp}=3m_{min}/(4\pi r_{min}^3), they reformulate their expression in terms of a minimum size r_{min}. Objects have to be at least r_{min} \sim 2 km in radius, otherwise they are destroyed in the atmosphere before they can hit the ground.
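
To make the scalings concrete, here is a small numeric sketch (Python) of the two relations used above: the escape velocity v_{esc}=\sqrt{2GM/r} evaluated for Earth, and the sphere relation between an impactor's mass, density and radius. The 3000 kg/m^3 impactor density is an illustrative assumption, not a number from the paper.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24   # kg
    R_earth = 6.371e6    # m

    # Escape velocity v_esc = sqrt(2GM/r); for Earth this comes out near 11.2 km/s
    v_esc = math.sqrt(2 * G * M_earth / R_earth)

    # Sphere relation m = (4/3) * pi * rho * r^3, with an assumed impactor density
    rho_imp = 3000.0     # kg/m^3 (assumption for illustration only)
    r_min = 2e3          # m, the ~2 km minimum radius quoted above
    m_min = (4.0 / 3.0) * math.pi * rho_imp * r_min**3

    print(f"v_esc ~ {v_esc / 1e3:.1f} km/s, m_min ~ {m_min:.1e} kg")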

Now, there are giant and small impacts. In Figure 1, you can see a sketch of an impact with a massive object that produces a giant impact. The impactor passes through the atmosphere, collides with the surface and ejects material close to the location of impact. To simplify the calculations, the authors assume that the entire atmosphere has a constant temperature (physicists say the atmosphere is isothermal) and that no energy is "lost" as heat in the flow through the atmosphere (in physical terms, the process is adiabatic). In the sketch, the impactor is so massive that it induces a strong shock that travels through the entire planet and causes a loss of the atmosphere on the other side of the planet. This scenario is called global atmospheric loss. In contrast, smaller impactors are not strong enough to induce a shock propagating through the planet. They cannot cause a loss on the other side, and only produce a local loss close to the impact location, as seen in Figure 2. Although a single small impact ejects less mass from the atmosphere, the sum of many small impacts could produce significant total mass loss. Taking into account the above assumptions, an interesting question is: what impactor size causes the highest atmospheric mass loss compared to the impactor's mass? To test this, the authors consider impactors of a fixed size and study how much total mass in impactors is needed to get rid of the entire atmosphere.

Figure 2: Illustration of mass loss by an impact with a small body. Assuming all impacting bodies have the same density but different sizes, the larger ones are more massive. The relevant distance to consider for loss is the distance of the location of impact (impact site) from the reference height of the atmosphere (scale height). The larger theta is, the larger this distance becomes, and the more mass (and correspondingly the larger a radius) is needed to eject mass in that direction.


What do they find out, what does it mean?

Surprisingly, they find that small objects about r = 2 km in radius, only slightly larger than the minimum size needed to eject part of the atmosphere (r=\sqrt{3} r_{min}), are the most efficient at reducing the atmosphere (see Figure 3). Based on this result, the authors suggest that Earth lost most of its atmosphere while it was still forming, when plenty of small km-size bodies flew around and impacted the Earth. According to the authors it could very well be that today's atmospheric mass corresponds to the equilibrium between the delivery of new material to Earth by small objects and the amount of mass that got ejected by small impacts. The relatively small amount of material ejected from the planet by these small bodies could also explain why there are no signs of a global magma flow, which would be expected if the atmosphere had been lost during a giant impact. To summarize, it is fair to say that – regardless of whether this happened to Earth's atmosphere – it is certainly interesting that small impactors are so efficient at ejecting planetary atmospheres. With respect to the young field of exoplanetary (atmospheric) science, it contributes an important new aspect to take into account when explaining the formation of planetary systems. Future studies could test whether the results change with different impact speeds and impactor sizes.

Figure 3: Total mass in impactors needed to eject the Earth's entire atmosphere, for objects of different size. Interestingly, objects only a little bit larger than the minimum size are the most efficient at ejecting the atmospheric mass.




Community service: Vetting my local library's children's space books by The Planetary Society

Space fans, here is a valuable community service that you can perform in your neighborhood: Vet your school library's space book collections. My kids' elementary school librarian asked me to take a look at the nonfiction space book collection and cull any outdated or just wrong books. I culled quite a few, and am now recommending some replacements.


The Planetary Society Visits Camp Imgur by The Planetary Society

Last weekend Imgur hosted a weekend camp for their members and invited Merc Boyan to come out and talk about space and The Society!


Video: NASA Prepares for Sixth SLS Engine Test by The Planetary Society

The Planetary Society will be at NASA's Stennis Space Center this Thursday for the sixth test firing of the RS-25 engine, which powers the Space Launch System.


On the future. by Simon Wardley

I often talk about the importance of situational awareness. The technique I use for this is known as Wardley mapping, and you can read about it on CIO magazine. If you're new to this then the rest of the post won't make sense, so I'd advise you to save some time. TL;DR: it's complex.

Once you have a map, it becomes fairly easy to see how a market will evolve. There are numerous common economic patterns from componentisation to co-evolution to inertia along with various forms of competitive gameplay that can be used to manipulate this change and create an advantage. With a map (which provides position and movement) of an economic space you can examine the line of the present and work out points to attack. From here, working out strategic play (i.e. why attack here over there) is fairly easy. I've summarised this in figure 1.

Figure 1 - Determining future from now.


With reasonable situational awareness you can anticipate certain changes and prepare for them through scenario planning. You can avoid getting caught out unnecessarily. This is more than enough (along with operational efficiencies through removing duplication and bias) to compete against most companies. However, there are some more advanced techniques. 

When we look at a map, certain aspects of change are more predictable than others. I've provided a list in figure 2.

Figure 2 - Predictability of Change.


For example, existing trends (i.e. stuff that is happening) are fairly obvious in terms of what (i.e. the trend) and when (i.e. now). There's little advantage in this stuff despite it filling up endless management journals. At the same time there's the unknowable e.g. genesis of a new act or impending product to product substitution. The best you can do here is scan the environment, notice it's happening (i.e. it's become an existing trend) and react accordingly. You can't anticipate this stuff i.e. Blackberry couldn't anticipate the iPhone would appear and disrupt it.

However, the knowable stuff is the most interesting because here you can create an advantage and you can anticipate what is going to happen (but not when) or vice versa. In certain special cases you can do a reasonable job of both through the use of weak signals but I'll get onto that.

For example, when something new appears we can anticipate that if there is competition (supply and demand) then it'll evolve! We can even specify the stages of evolution (for activities, practices, data and knowledge) e.g. an act will evolve from genesis to custom built to product (+rental) to commodity (+utility). We can state how its properties will change (from the uncharted to the industrialised) and how competition will drive this. I even know that on average it'll take 20-30 years to go from genesis to the point of industrialisation. We know an awful lot about the what.

Unfortunately we can't predict when the state changes will occur with any detailed level of precision, as this depends upon individual actors' actions i.e. I know bio-printing will eventually become a product and then a commodity component, but I don't know who will make this happen or precisely when each of these state changes will occur.

However, there are some special classes of change. For example, I know that any act will evolve from product to commodity (+utility). But, I also know that as it does so, past product companies (suffering from inertia built up during a time of relative peace between product vendors) will be disrupted by new entrants. It'll take about 10-15 years for the change to become obvious and the past vendors to be on their way out. There'll be an explosion of new activities built on top of this commodity (a time of wonder) and a change of practice related to the act (co-evolution). There's an awful lot I can say about the what of this product to commodity (+utility) state change, which we describe as a point of 'war' in the economy.

Fortunately, in this special case there's a very specific weak signal technique which I can use to narrow down the target range of when a 'war' is going to occur. I've provided some results from this technique in figure 3.

Figure 3 - Points of War


(P.S. Green is unpredictable. Muddy brown is middling. Red marks the points of war.)

It's through an earlier version of the technique that I knew that compute was moving towards a utility before AWS. It's also how, at Canonical in 2008, I knew we had to focus on the co-evolved practices (devops), as well as capturing the cloud market and any new activities building on top, and how past vendors had far less time than they realised.

So, for example, I happen to know the 'war' in 'big data' systems is kicking off. I've actually known for quite some time this was heading our way. This 'war' means we will see utility providers in this area (they have already launched). The 'big data' product vendors (who have inertia) will dismiss these players and declare that the new entrants are useful for development, but when it comes to production you will want to talk to them. They'll probably even spread FUD. However, in about 10-15 years the past vendors will be in serious trouble. I can even tell you that this type of change (known as a punctuated equilibrium) will catch those past vendors out i.e. in 5-10 years those vendors will be crowing about how the new entrants represent less than 3-5% of the market, but by 10-15 years those new entrants will be at 30-50%. If you want (and I felt inclined), I could already give you a list of the dead.

This change will cause an explosion of new activities (i.e. genesis) based upon these standard components in a time of wonder around data. I know that a time of wonder will occur, I can say roughly when but of course I don't know what those new activities are (no-one does). Genesis is unpredictable but at least I can tell you to keep an eye out - new stuff will happen! There will also be new practices developing around the use of such utility services, we'll probably even give it a meme (hopefully not DataDev or DataOps or any other awful combo).

Now, if I understand my value chain then I can scenario plan around fairly predictable patterns and use weak signals to identify when it's likely to happen. I can't avoid the unpredictable (e.g. product to product substitution) any more than I can avoid the need to gamble and experiment in the uncharted space if I'm trying to create something new. But I can ruthlessly exploit the knowable against opponents who can't even see the board. If they could, they'd never be disrupted by anticipatable forms of change (e.g. cloud) because even with the inertia, you could overcome it.

Most companies, however, have little to no situational awareness, which means everything bar the obvious existing trends appears to be unknowable and comes as a complete shock. These are my favourite companies to compete against, and there's an awful lot to choose from out there.

Happy Hunting.


August 12, 2015

Extracting the BBC Genome: Close-Up. by Feeling Listless



Film Back in 1995, as part of the century of cinema celebrations (of which more in my old post about The Fifth Element), the BBC broadcast a series of short five- or ten-minute programmes in which prominent people chose and described their favourite film scenes.

Being a student at the time, I was never around when they were broadcast, which was through July, then throughout the autumn, and at odd times during the day. But I have a vivid memory of the episode in which Gale Anne Hurd describes the resuscitation scene from The Abyss and how, because it's the emotional climax, it made it exceedingly difficult to find a satisfying conclusion.

Sometimes they were succeeded by a broadcast of the film in question.

For years I've wondered exactly which films had been chosen and by whom, and the other day I realised that there was now a source in existence which could tell me.

A quick search of the BBC Genome and here they are.

Find below a list of the films and the person who chose them.  For ease of use, I've only included that information but this Genome search has all the relevant TX dates should you be interested.

They're generally in broadcast order, although I've changed the chronology whenever the same film's chosen by two people or the same person's chosen two films.  To be honest it doesn't look like they were produced to be broadcast in any particular order anyway and there were a few repeats during the run (which I've left out too).

Someone's also recently uploaded some episodes to YouTube which I've embedded so you can get some idea of what the series was like.

In case you're wondering, someone else has already listed Moviedrome.  That's on the Genome too.

Close-Up (BBC, 1995).

An American in Paris (ballet scene). John Barry (composer).

The Killing Fields. David Puttnam (director).

Casablanca (final scene). Russ Meyer (director).

Les Diaboliques. Denis Healey (politician).



Madonna of the Seven Moons. Carla Lane (writer).

Sunset Boulevard. J G Ballard (writer).

Beauty and the Beast. Janet Street-Porter (television executive).

The Wizard of Oz. John Waters (director).



Brief Encounter. Mary Whitehouse (!).

Safety Last. Mary Whitehouse (twice?).

Rio Bravo. John Carpenter (director).

Metropolis. Ken Russell (director).

Faster, Pussycat! Kill! Kill! Jonathan Ross (presenter).

On The Waterfront. Maeve Binchy (novelist).

On The Waterfront. Lynda La Plante (author).

White Heat. Michael Mann (director).

East of Eden. Michael Mann (director).

Battleship Potemkin (Odessa steps). Roger Corman (director).



Weekend (Godard). Mike Figgis (director).

City Lights (Chaplin). Richard Attenborough (director and actor).

The Producers. Teresa Gorman (MP).

Victor Victoria. Teresa Gorman (MP).

King Kong. Alex Cox (director).



King Kong. Ray Harryhausen (sfx animator).

Night of the Hunter. Christine Vachon (producer).



A Hard Day's Night. Hanif Kureishi (director).



Gone with the Wind. Diane Abbott (MP).

Gone with the Wind. Julian Clary (comedian).

The Best Years of Our Lives. Richard Fleischer (director).

Up in the World (Wisdom). Nick Park (director).



Henry V (Olivier) (epic battle scene). Michael Winner (director).

My Darling Clementine. John Milius (director).

Return to Paradise. John Milius (director).

Citizen Kane. John Schlesinger (director).

Stagecoach. P D James (crime novelist).

Spellbound (Hitchcock). Robert Rodriguez (director).

Gun Crazy (bank robbery scene). Stephen Woolley (producer).

The Woman in Red. Nicolas Roeg (director).

The Apartment. Susan Seidelman (director).

The Apartment. Volker Schlondorff (director).

The Searchers. Brian Cox (actor).

Three Colours Blue. Brian Cox (actor).

Once Upon A Time in the West (opening scene). Maggie Greenwald (director).

Gypsy. Terence Davies (director).

The Tales of Hoffmann. George Romero (director).

Pather Panchali. Mike Hodges (director).



The Good, the Bad and the Ugly. Joe Dante (director).

Touch of Evil. Joe Dante (director).

The Abyss (resuscitation scene). Gale Anne Hurd (producer).

Daddy-O. James Ellroy (writer).

Zero de Conduite. Abraham Polonsky (screenwriter).

The Wages of Fear. Perry Henzell (director).

A Fistful of Dollars. Christopher Frayling (historian).

The Silence (Bergman). Jane Birkin (actor).

A Blonde in Love. Ken Loach (director).

Saturday Night Fever (opening scene). John Badham (director).

Urga. Julie Christie (actor).

Twelve O'Clock High. Bob Rafelson (director).

Kes. Kathy Burke (actor).

Jules et Jim. Mike Leigh (director).

Bad Day at Black Rock. Philip French (critic).

Klute. Lizzie Borden (director).

8½. Terry Gilliam (director).

Le Plaisir. Bernardo Bertolucci (director).

A Place in the Sun. Monte Hellman (director).

The Third Man. Monte Hellman (director).

The Godfather, Part II. Allen Daviau (cinematographer).

Pickpocket. Paul Schrader (screenwriter and director).

Greed. Robert McKee (screenwriting teacher).

The finale (broadcast on New Year's Eve) had John Landis going to town and choosing "his favourite comic movie moments, featuring the Three Stooges, Buster Keaton and Laurel and Hardy and including scenes from Annie Hall and Jaws."

Also, there's one episode which is simply listed as "Another favourite movie moment", with nothing else.

Brief commentary:  some of the choices are really interesting - Nic Roeg choosing The Woman in Red or Diane Abbott on Gone With The Wind - and it's frustrating not to know what they said.  Plenty of women directors too.  Oh and the youngest is Robert Rodriguez who in 1995 was on the crest of his original burst of fame with El Mariachi and Desperado.


Another Doctor Who Trailer. by Feeling Listless



TV With the Comic-Con trailer (which has subsequently received some broadcast play on television) being a bit, yeah, nice, oh, he's playing the guitar, here's another one with more guitar playing and a general sense of the Doctor being more traditionally heroic this time. Clara seems far more enamoured at least. A few new things:

This:



Which brings to mind the best moment in the whole of Torchwood's Miracle Day:



This:



"Tale as old as time..." Doctor Who does Beauty and the Beast? Animal Kwackers? King of the Cheetah People? The Garm?  Also they're really selling the Maisie stuff.  Notice how she's not wearing the Civvies from the TARDIS publicity shoot here.  Is she in more than one episode?

Overall, with the viking helmets and such, it feels more like the Matt Smith era in tone and look, since it's a return to blues and yellows over the autumnal colours of the previous series. It's noticeable there are no killer lines, no jokes, and the Doctor's generally very isolated; he doesn't really interact with Clara much, apart from that bloody lovely backwards hug towards the end. My expectations are still magnificently low, but this at least looks more like someone trying to make Doctor Who again.


Probably, the best talk ever ... at OSCON. by Simon Wardley

I gave a keynote at OSCON this year ... and no, my talk is not the "Best Talk Ever". It was a real honour to speak. I had two weeks' notice to prepare, and whilst it's not as polished as I'd like, it was a reasonably good talk and one that I'm happy with. I covered the usual areas of evolution, management, lack of situational awareness and surge pricing for funerals, in between a story of OSCON moving.

A reasonably good talk


I received a number of kind comments, including one which was "When I grow up, I want to speak like Simon Wardley". OSCON is a tough place to present at because there are so many good speakers - folks like Damian Conway, Robert Lefkowitz, Allison Randal, Paul Fenwick etc. They're smart, good communicators who know how to tell a story. I've spent some time on the speaking circuit; I've still to learn my craft properly but I do fine. However, before my talk I watched the earlier keynotes and was stunned by one.

Keila Banks walked in and literally stole the show. Keila is a future superstar of tech. At the age of 13 ... well, this is incredible. Inspirational, engaging, with purpose and a true story.

When I grow up I want to speak like Keila Banks! You must watch this.

The BEST talk ever



Estimating the number of intelligent civilizations from planet formation rates by Astrobites

Title: On The History and Future of Cosmic Planet Formation
Authors: P. Behroozi, M. Peeples
First Author’s Institution: Space Telescope Science Institute, Baltimore, MD

Humans have long wondered if we are the only form of sentient life in the Universe. For decades, observations through efforts such as SETI have been ongoing in the search for extraterrestrial life (namely, extraterrestrial radio signals). However, these attempts have been frustrated by a lack of observational evidence and a great deal of uncertainty about what signs we should even be looking for, since our search is based on our own anthropocentric view of what extraterrestrial life should look like. As a result, a good portion of our search for extraterrestrial life has centered around estimating the probability of detectable civilizations.

The Drake equation has long been used for estimating the number of intelligent civilizations capable of radio communications that may be inhabiting our galaxy. This equation involves several variables, and while some are reasonably well constrained (e.g. the rate of star formation in our galaxy), most are not and we are only able to guess at them (e.g. the fraction of planets that may be able to support life, and the lifetime of intelligent, communicating civilizations). Since we only have a single data point (i.e. our own species) on which to base our estimates, this leads to tremendous sources of uncertainty, and as a result, proposed estimates using the Drake equation are wildly discrepant over dozens of orders of magnitude.
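
To see why the estimates swing so wildly, recall that the Drake equation is just a product of those factors, N = R* x fp x ne x fl x fi x fc x L. Here is a toy sketch (Python); every value below is an illustrative assumption, not a number from this paper, and nudging the poorly constrained ones moves the answer by orders of magnitude.

    # Illustrative-only Drake equation sketch; all values are assumptions.
    R_star = 1.5     # star formation rate in the Milky Way (stars per year)
    f_p    = 1.0     # fraction of stars with planets
    n_e    = 0.2     # habitable planets per planetary system
    f_l    = 0.1     # fraction of habitable planets that develop life
    f_i    = 0.01    # fraction of those that develop intelligence
    f_c    = 0.1     # fraction of those that become detectable (radio)
    L      = 10_000  # lifetime of a communicating civilization, in years

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(N)         # ~0.3 with these guesses; other plausible guesses give millions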

Instead of the Drake equation, the authors of this paper use observational constraints and theoretical simulations to estimate the past and future history of planet formation (and from that, eventually, the number of potential civilizations in our Universe). This is done by combining galaxy star formation rates with the planet formation rate per star to estimate the number of planets that have formed in the past and the number that will form in the future. Observationally, this is a viable approach, as thousands of exoplanets have been detected by NASA's Kepler mission and our understanding of galaxy star formation rates is well constrained. Fig. 1 shows the resulting planet formation history for our Milky Way.


Fig. 1: The total number of Earth-like planets and gas giant planets in the Universe, as a function of time since the Big Bang. The blue square indicates the median formation time of each type of planet. The vertical dotted line indicates the formation time of the Solar System, which occurred roughly after 80% of Earth-like planets and 50% of all giant planets had already formed. The Earth-like planet formation rate is directly proportional to the galaxy star formation rate, so this quantity is constrained by measuring the latter. The two types of planets form at different rates due to their different dependencies on stellar metallicity.

From this estimate of planet formation rates, the authors then use a Bayesian statistical method to constrain the number of intelligent civilizations in the Universe given the formation age of the Earth (see Fig. 2). The authors estimate that there may be 10^9 Earth-like planets and 10^10 giant planets in the Milky Way alone. Additionally, the authors predict that, based on current star formation models, the Earth formed before 92% of the similarly sized planets that have formed or will ever form. This suggests a fairly large chance that there may be civilizations in the Universe other than our own. While the task of estimating the number of intelligent civilizations in our Universe is still subject to various uncertainties, these results offer hope that our search for extraterrestrial life may not be completely unfounded.


Fig. 2: This plot shows the probability of there being a certain number of planets with civilizations in the Universe. Each curve represents this probability, under the condition that Earth is the 1st/10th/100th planet in the Universe with a civilization. If the Earth is *not* the only planet in the Universe to have a civilization, the probability that there are thousands of civilizations in the Universe increases drastically.


New Curiosity Self-Portrait by The Planetary Society

Amateur image processor Damia Bouic shares new stunning images from Curiosity—including a "selfie" from a whole new angle.


LightSail Nominated for SmallSat Mission of the Year by The Planetary Society

The Planetary Society’s LightSail spacecraft has been nominated for the American Institute of Aeronautics and Astronautics (AIAA) Small Satellite Technical Committee Mission of the Year 2015 award.


August 11, 2015

My favourite film of 1986. by Feeling Listless



Film One of the great joys of the MARVEL cinematic universe is the post-credit sequence not just because of the added value but also because they're a way for cinema goers to note exactly who else lives in their head space. At each performance of one of these films there will always be people who leave just after the given director's name disappears and they'll always but always survey the auditorium, a questioning look in their eye wondering, "Why are you all still here? Why haven't you left yet?" before turning and leaving us to see either the actual end of the story (Thor: The Dark World) or a preview of what looks like footage from the next release (Ant-Man). The most ironic example of this was the screening of Avengers: Age of Ultron I attended, a film where the producers and director had warned that there wasn't going to be anything after the credits but the entire audience, some thirty of us, stayed in our seats anyway. Just in case.

Unlike most of those, the post-credits sequence on Ferris Bueller's Day Off is pretty difficult to miss, with the duration of the credits absorbed with Ferris's headmaster having to take the school bus and a first floor corridor in Ferris's house appearing just after said vehicle has rolled into the distance.  That makes what he says, "You're still here?  It's over.  Go home.  Go."  Well, yes, Ferris, it's over now, you scamp.  For a while I thought this was the first, but as this Wikipedia page explains, it might have been The Rutles: All You Need Is Cash or The Muppets (depending on your attitude to the former being a theatrical release).  The one for Adventures in Babysitting is especially fun because it ties up one of the film's bits of plot.  But was Ferris the first to speak to the audience directly?  What must that have been like at the time?  Did people just laugh?  Were some of them freaked out?

On video, it was a particularly useful moment because it signalled the end of the Yello track if you didn't happen to be watching the screen. Ferris is a rare example of an 80s film which doesn't have a soundtrack album, because John Hughes didn't think it constituted a coherent collection of songs and didn't think anyone would want to buy them, so the only way to listen to some of the songs back in the VHS age was to simply watch the film. The Dream Academy's cover of Please, Please, Please Let Me Get What I Want has only just become widely available on their best-of album this year, and the version at this Spotify link isn't even the instrumental from the film. So for a while I'd simply have the video on in the background and let the sounds fill the room, and when Ferris's voice appeared it reminded me to rewind the tape so I could start all over again.


Subscriptions (feed of everything)


Updated using Planet on 4 September 2015, 05:48 AM