This page combines various news websites and diaries which I like to read.
November 24, 2015
Let’s start with the most important circuit board I’ve milled in the last few days.
It’s the last one I did — and it should have been my first.
But we’re lazy about doing the necessary cutting trials, aren’t we?
There had been flaws in the circuits I had machined. In particular, some contours had not quite been fully cut, leaving a little join of copper that I had to break with a scalpel. Also, some cuts were much cleaner than others.
Until I did these experiments I thought that uncut material would be at the ends of a cut, possibly due to a tool deflecting back from the cutting forces. And I imagined that the messier lines were because I was in a hurry and cutting too fast.
The truth was the exact opposite of what I thought.
The square cuts in batches 7 and 8 are cut in the direction of bottom left to top right with the top right ones done first.
Batch 7 had the tracks extended because I was looking to counter the tool deflections, which didn’t exist.
Batch 8 shows what’s really going on. In areas where the cutting is too light the tool can drag along the surface for a distance before it bites.
The pairs in batch 9 are done with two diverging cuts above and two converging cuts below. The upper ones exhibit gaps.
The roughness of the cut edges is entirely down to the feedrate and spindle speed.
Batch 3 was done at 3000rpm and a feedrate of F600 (millimetres per minute) and is horrible.
Batch 5 is at 8000rpm with F50 in the first column, F150 in the second column and F450 in the fifth column. It gets better the faster you go. Batch 6 extends the increasing feedrates, but was at 16000rpm. Batch 2 is the same as batch 5 but at 16000rpm.
Batches 1 and 4 are likewise F50 on the left, stepping up in increments of F100 going to the right, and show that a minimum feedrate is required to get a clean cut.
Now that I know the parameters for what potentially makes a good cut, I should be able to make a single test piece of copper where it carves out the numbers of the feed and speed conditions in a big array that I can nail to the wall.
Anyway, to make sure I am isolating the tracks properly, I now machine an extra millimetre around every contour. It’s also necessary to probe the copper and adjust for its not-quite-flatness (a feature I saw in the ChiliPeppr system). Neither of these features is available in FlatCAM, so I have scavenged its gerber-loading code and made my own set of tools.
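The probe-and-adjust idea itself is simple: measure Z at a grid of points, then shift each toolpath Z by the interpolated surface height. A minimal sketch of the interpolation step, assuming probe points on a unit grid (the grid values below are made up):

```python
import math

# Minimal sketch of auto-levelling (as seen in ChiliPeppr): probe Z at
# grid points, then offset every toolpath Z by the bilinearly
# interpolated surface height. The probe values here are invented.

def bilinear_z(probe_grid, x, y):
    """probe_grid maps (column, row) grid indices to measured Z offsets
    in mm; grid spacing is taken as 1 unit for simplicity."""
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = x - x0, y - y0
    z00, z10 = probe_grid[(x0, y0)], probe_grid[(x0 + 1, y0)]
    z01, z11 = probe_grid[(x0, y0 + 1)], probe_grid[(x0 + 1, y0 + 1)]
    lower = z00 * (1 - tx) + z10 * tx   # interpolate along x at row y0
    upper = z01 * (1 - tx) + z11 * tx   # interpolate along x at row y0+1
    return lower * (1 - ty) + upper * ty

# A 2x2 probe patch where the copper rises 0.05 mm toward one corner:
grid = {(0, 0): 0.00, (1, 0): 0.02, (0, 1): 0.01, (1, 1): 0.05}
offset = bilinear_z(grid, 0.5, 0.5)   # surface height at patch centre
adjusted_cut_z = -0.10 + offset       # nominal depth, shifted to follow it
```

With cuts only 0.1 mm deep, even a few hundredths of a millimetre of board warp is the difference between a clean isolation channel and an uncut join, which is why this step matters so much.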
Here’s a vid of the process on a simple circuit.
Then I got a very ambitious job and I spent the whole of Friday doing a really complex circuit as part of the machine upgrade:
I’ve superimposed the back and the front sides of this monumental piece, which was at the edge of my capabilities. I did it with one gouge and a lot of messed-up drill holes, but made it in time for the last post.
I was too stressed to get any documentation, beyond this one picture. The big holes are for the location pins when flipping it over, so it was a good idea to squeeze one right in the middle along the reflection line.
That’s it for now, until the next circuit job shows up. Or I should get back to some brass machining if I intend to make any truly unwanted Xmas presents for any friends.
Even as recently as Web Summit at the beginning of November I was asked about a Bubble as part of a panel. It has since occurred to me that the problem with using that word to denote possibly overextended valuations is that it suggests a sudden correction in the form of a “popping” or “bursting” of a bubble. That’s pretty much what we experienced at the end of the actual Internet Bubble when many companies disappeared entirely and others were down 90% or more.
So I want to propose a new word: a “bulge.” We are now a decade and a half past the Internet bubble and well into what Carlota Perez calls the deployment phase. That is when bigger changes in society occur as a technology which was installed during an initial frenzy actually gets widely used. Generally this is when more value is created for society and longer lasting companies are built (or existing companies transform themselves using the technology).
It is, however, still possible during the deployment phase for valuations to get ahead of where they should be. And that’s what I would like to call a “bulge” (please feel free to propose a better word). A bulge can come and go relatively quickly but when it goes away the thing itself is still there. So valuations could fluctuate by as much as say 50% but few, if any, companies go away entirely.
Here is a great chart from a post about valuations of SaaS companies that shows a bulge in the SaaS sector around 2014.
The chart also shows the prior impact of the 2008 financial crisis when valuations fell behind in what might be called a “dent.”
I suspect that similar multiple charts can be drawn for other groups of companies and that we had a bulge in public valuations that coincided with the rapid addition of many private companies to the $1B+ valuation club, as can be seen on this chart.
I am now quite convinced that “bulges” and “dents” can best be understood in terms of multiple expansion and contraction. When it comes to an expansion phase there can be a positive feedback loop between private and public company valuations as I previously wrote about.
Public markets will eventually correct and do so more rapidly since they are all common stock. Private transactions by contrast allow for preferred structures (such as ratchets and warrants) that often keep the headline price “sticky”. And that’s exactly what we have seen. Public market valuations have already come down substantially but private ones are lagging behind. We see that most clearly when a company such as Square goes public at a valuation substantially below its last private round.
The bulge is coming to an end and we may even wind up with a dent. This is important for investors and entrepreneurs but it won’t have nearly the same dramatic impact as a bubble as companies can and will carry on (see Square again).
Now let’s see if I can get bulge and dent to stick or if someone proposes a better alternative.
Authors: Brent Miszalski, P. A. Woudt, S. P. Littlefair et al.
First author’s institution: South African Astronomical Observatory
Status: Accepted for publication in MNRAS
On the 16th of November in 483 CE, astronomers in China recorded the appearance of “a guest star east of Shen, as large as a peck measure, and like a fuzzy star”. The new celestial light shone brightly for just under a month, then faded to nothing. Over 1500 years later, the authors of today’s paper may have found the source.
The suspect is a nebula known as Te 11, a cloud of expanding, glowing gas around half a light-year across at its widest point. Te 11 was originally thought to be a planetary nebula. These are, confusingly, nothing to do with planets, but are instead made out of material thrown off a red giant star as it shrinks into a white dwarf.
But although visually Te 11 looks like a planetary nebula, many of its characteristics don’t quite fit. It’s moving too slowly, and has much less mass than other, confirmed examples.
To search for alternative ways in which the nebula could have formed, the authors obtained a light curve, shown in the figure below, and spectroscopy of the object lurking in Te 11’s centre. They found a white dwarf, just as the planetary nebula hypothesis predicted. But it wasn’t alone.
The white dwarf is accompanied by an M dwarf star, so close together that they orbit around their centre of mass in less than three hours. At such close proximity, the gravity of the white dwarf draws material off its companion, forming a ring of gas known as an accretion disc. The material in the disc then gradually spirals down onto the white dwarf.
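To put a number on “so close together”, a quick Kepler’s-third-law estimate works: the 1.18 solar-mass white dwarf figure comes from later in the article, while the 0.2 solar-mass companion is an assumed, illustrative value for a small M dwarf.

```python
import math

# Rough check of how close "orbiting in under three hours" really is,
# via Kepler's third law: a^3 = G(M1+M2)P^2 / (4*pi^2).
# The 1.18 M_sun white dwarf is from the paper; the 0.2 M_sun M-dwarf
# mass is an assumed illustrative value.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m

period = 3 * 3600.0                     # orbital period in s (upper bound)
m_total = (1.18 + 0.2) * M_SUN          # combined mass, kg
a = (G * m_total * period**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"separation ≈ {a/1e3:,.0f} km ≈ {a/R_SUN:.1f} solar radii")
```

Under those assumptions the two stars are only a bit over one solar radius apart, closer together than the Sun is wide, which is why the white dwarf can strip gas from its companion.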
In a number of these systems, the disc becomes unstable every few years, probably due to a change in the viscosity of the gas caused by a rise in temperature (no one is exactly sure how it works). The material falling onto the white dwarf briefly turns from a gentle trickle into a raging torrent, releasing huge amounts of gravitational energy as light. The regular mini-explosions give the systems their name: dwarf novae, after the larger cosmic explosions called novae and supernovae.
The authors’ observations of Te 11 had been prompted by five nova-like events in the last ten years, spotted by the Catalina Real-Time Transient Survey. The new observations both confirmed that the system was a dwarf nova, and provided precise measurements of some of the characteristics of the two stars, such as their masses and radii.
Te 11 hosts an unusually massive white dwarf, 1.18 times the mass of the Sun (a typical white dwarf is around 0.6). This meant that, as well as dwarf novae, bigger classical novae could also occur. Classical novae take place when the mass building up on the white dwarf becomes so dense that the hydrogen begins to fuse, releasing huge amounts of energy and blowing apart the (newly added) outer layers of the star.
Such a high mass white dwarf means that a nova could reasonably have occurred recently, within a time scale of hundreds of years. The material from the nova would have slammed into the unusually dense interstellar medium in the area, creating the Te 11 nebula. The authors postulate that this huge explosion was the source of the “fuzzy star” spotted in 483 CE.
Miszalski et al. finish by suggesting that more novae could have occurred since then, and high resolution imaging might reveal shells of material nestled inside the nebula. Observing these would give unprecedented insight into the physics of novae and the structures they leave behind.
NASA placed its first official order for a SpaceX Crew Dragon to carry astronauts to the International Space Station, the agency announced Friday.
A project to digitize more than 90,000 images taken by NASA’s five Surveyor spacecraft in the 1960s has revealed early hints of never-before-seen treasures captured by America’s first robotic lunar landers.
A panel of outside experts reviewed the design of the Mars 2020 rover's color cameras, and approved the progress of Mastcam-Z. It still exists only as an idea in the cloud, but it's one significant step closer to being sent to Mars.
November 23, 2015
Games Oh well, bloody hell. The Hitchhiker's Guide To The Galaxy text adventure was released across multiple platforms back in 1984, and the key element was that large chunks of the text were written by Douglas Adams, with a proportion of that being whole new material. But he also guided the making of the game and helped with some of the puzzles, and it's as close to being in the novel or radio series as it's possible to be.
There have been multiple versions of the game available since then, produced as interactive online experiences (all linked from the Wikipedia page here), but it's been impossible, as far as I know, to know exactly what Douglas Adams's input was on the game. Until now.
As part of his project to scan everything in existence, Jason Scott of the Internet Archive was given access to Infocom's archive, which includes folders of material pertaining to each of the games, The Hitchhiker's Guide among them.
Which means at this link there's a file containing all of the material on the making of the game, including Douglas's letters and notes, maps, interviews, adverts, articles, a trove. Wow.
November 22, 2015
Managed to find some time to flick through my music and produce a new mix.
I’ve made it available for download so you can stick it on your phone for listening on that train journey // cold walk home =)
November 21, 2015
TV Bye then Clara. Killing off the companions in Doctor Who is a relatively rare occurrence. The classic series brought an end to Katarina, Sara Kingdom and Adric and some people are less than convinced that the former two even really are companions anyway. Technically none of the TARDIS team members have actually died in the revival, since Amy and Rory were really only zapped back in time by a weeping angel, trapped perhaps but still breathing. Donna’s knocking around, albeit having had her memories of her time with the Doctor erased. The spin-offs have been a fair old massacre but even then plenty of the Time Lord’s friends were resurrected by the climax of the Eighth Doctor novels even if he couldn't remember who half of them were.
So when it was hinted this would be Clara’s fate, I didn’t really expect it to be true and, more to the point, still don’t. With two episodes left of the season and so many unanswered questions, even with her broken body on the cobbles of Doctor Who’s version of Diagon Alley, even with Murray Gold’s utterly superb score with its many callbacks to Clara’s theme, the teary performance by Jenna Fucking Coleman, even with Sarah Dollard’s debut script absolutely capturing her relationship with the Doctor and understanding his potential reaction to her passing, I don’t think she’s gone. As Dubya said, “There's an old saying in Tennessee — I know it's in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can't get fooled again.” Which was true of the DVD release of The Underwater Menace and feels true again. Not Clara, not now.
Perhaps that’s why I didn’t cry. Social media’s awash with anguish and yet, although the corners of my mouth were certainly pointing downwards and I found myself utterly thrilled with Capaldi’s performance, notably the moment when he presents a grief I don’t think we’ve seen from him before, these eyes, these eyes which sobbed at the sight of Leo falling off a raft in Titanic and Kirk’s “Oh my…” in Star Trek: Generations and can’t even get to the end of Who's own Journey’s End (Sarah Jane and the leeveeerrrr....) without stinging towards blindness, remained resolutely dry. Perhaps, after having been unable to cope with real world events lately, my brain can’t process the poignancy of a fictional event, and when I watch it again in a different light, if we have confirmation that her death is temporary, I’ll be able to process these moments differently.
Part of my inability to acknowledge that any of this is true is because it doesn’t fit. Given Clara’s origins, the fractured pieces of her across time, this impossible girl saving the Doctor here, there and everywhere still (as confirmed by Jac Rayner’s superb DWM comic Blood and Ice, in which Clara meets one of these facets and has to deal with the consequences of her existence), to have her die in a situation which has no thematic connection to any of it doesn’t fit. It’s also strange that none of this was even acknowledged, no mention from the Doctor about her being “the impossible girl”. For her to be snuffed out by this previously unmentioned death method with a complicated set of rules lacks dramatic unity. Chekhov’s rifle has not gone off. It remains hanging on the wall.
Yet professional publications are treating this as her egress, so perhaps I’m just in denial. It is true that companion exits, like regenerations, do tend to be more like narrative happenstance than anything planned ahead. Adric could just as easily have fallen off the roof of Cranleigh Hall as brought about an extinction event. But in the revival there’s more often than not a question answered. When Rose is trapped in the alt.universe, it’s the culmination of a collective story arc which included the introduction of that universe earlier in the second series and watching the death of her real Dad in Father’s Day. Martha Jones left acknowledging she didn’t need the Doctor any more. The horror of Donna’s loss was that she was effectively being rebooted back to becoming the monster from The Runaway Bride.
Plus, and let’s not pass on before acknowledging this, there’s the cover of this month’s committee invective and pages twenty-six and twenty-seven, which could just as well be opposite shots in a particular scene. Now, it’s true that I haven’t read any of the actual text yet, spoilerphobe that I am now, the BBC’s press office not really helping in this situation, but I find it deeply unlikely that those photos would simply exist for the purposes of a photo shoot, a way of underscoring an undoubtedly clever strap line on the cover. Unless the text does say this and I now look like a fool, and not for the first time. Unless she’s one of the aforementioned facets and the bookend to the story of Clara Oswald is for Jenna Coleman to be playing a different version of her, just as she did at the beginning in Asylum of the Daleks, and we’ll have the joy of hearing her American accent.
Having burned through six paragraphs explaining why I don’t think Clara Oswald is really dead (and would Moffat really let someone else write her out?), what about the rest of Face The Raven? Well … yeah. I like it. Did I think it was dramatically strong enough to be Clara's final episode if indeed it really is? No. The notion of a safe haven for aliens hidden in the middle of London is so good it’s surprising one of the spin-off series hasn’t already thought of it and hopefully it’ll be returned to either there or in the television version. The process of finding it was also fun, reminding me of a good episode of Elementary (which I’m currently bingeing), the idea of trap streets being just the sort of thing Joan might remind Sherlock of (or at least which he’d pretend he didn’t know about so that she can feel like she’s making a contribution) (and I’m not at all convinced this isn’t what happened here) (the Doctor can be strangely ignorant at times).
For all her part in Clara’s downfall, Ashildr or Me still feels like a narrative anomaly, a character waiting to actualise. Bolting on these extra powers, the result of a different pact devalues the already tenuous final moments of The Woman Who Lived. There’s a general sense of wanting to simply furnish this character things to do having given Maisie Williams a comic-con pleasing place in the series, rather than a well planned out character arc. Rather like Danny Pink last year, it’s the show making a misstep in assuming the audience finds a character more interesting than we do, like trying to manufacture a River Song or Strax, incidental figures who we demanded to return even though that was never planned. Williams seems more assured in her performance on this occasion but I’m still not convinced. Where’s Rufus Hound?
To an extent this whole construct is filling a gap left by Torchwood, and has even absorbed some of its methods, the intrusion of retcon into the main series something of a surprise, and it’s pleasing that its properties are entirely consistent with how it’s used in that series. Is this another facet of this series’ theme of not being about the tenth anniversary even though that’s exactly what it is? The structure of the episode almost mimics the shift between showrunner eras, or at least the public and critical perception of them, opening in the very urban landscape of the RTD era, with its high rise estates and the contemporary visual of the tattoo counting down, giving way, as the characters step through the “portal”, to the mythological references and imagery of the Moffat era (even if both eras have contained elements of both).
Rigsy’s generally being set up as a potential new companion here and I’d welcome it, a refreshing change to have a young chap in the TARDIS, although it’s unlikely he’d leave his new family. The brief moments the Fourth Doctor and Adric spent together, the magician and his apprentice, are an under-appreciated seam in the mythology and you could absolutely imagine a similar dynamic with the Capaldi version as he is now, generally kinder, more resonantly caring about human life but retaining his eccentricity. Perhaps there’s an untold version in which the Doctor actually takes the whole family around the universe. It’s interesting, though more likely a budgetary issue, that we weren’t properly introduced to his wife. In other series there’d be a particular casting reason for that …
For all my scepticism of the outcome, there's no denying the dramatic power of the final scenes, especially the Doctor's wrath in the face of this creation of his which has gone oh so bad. A return to the dangerous figure of the previous series perhaps, one who uses the Daleks as a threat, but whereas in season eight you might imagine him actually going through with it, that man has (thankfully) gone. As Clara says, it would end as soon as he saw the pain of a child. If nothing else, and the elegance of the writing and understanding of the characters really shines here, it's Clara seeing him again, the man who stood on Christmas for a thousand years, defending the generations in a village, looking out from underneath those eyebrows. Not that I'm convinced we won't have some major coup in the final two episodes, with David, Matt or even Paul returning for the final big push.
All of which said, we are at the start of the three episode finale arc, the U in SOD U LOTT if you will, so how well this works won’t really be revealed until the beginning of December. The manic ratings desperation of the BBC PR synopses pretty much tells us what the Doctor’s journey will be from now onwards, and who’s apparently been in touch with Me, or Mayor Me, to ask for her aid in capturing the Doctor, all of which makes sense dramatically and would please Chekhov. If Face The Raven underscores anything so far, it’s how strong this series has been and how, even if it’s not always been brilliant, its ideas never entirely in focus, it’s not been anything less than entertaining and something to look forward to. Which is pretty much what it should be. Really.
At the 47th Division of Planetary Systems meeting, many presentations touched on some of the most contentious and poorly known aspects of how planets form.
If you were to download the entire catalog of photos taken at Saturn to date by Cassini and then animate them like a flipbook, how long would it take to watch them all pass by? The Wall Street Journal's Visual Correspondent Jon Keegan has your answer: nearly four hours.
In the process this automation also expanded its reach; there are limitations on how much any human controller can effectively control. So, whilst I agree that Uber is not disrupting taxis per se, it certainly seems to be disrupting the idea of a human taxi controller by first turning this interaction into volume operations of good enough. However, what's interesting in this topic is when people talk about the potential of Uber to disrupt the automotive industry.
Taxis in this world will become a thing of the past except, in a sense, most of the cars are acting like taxis with automated controllers and automated drivers. We don't own these cars, we just subscribe to our service and pay for what we use. By 2045, well the scenarios get even more outlandish. But who will own this new world of vehicles? Will it be Uber or Google or Apple or Ford or even GM?
November 20, 2015
Charlie here. I'm off to Sledge-Lit in Derby tomorrow, a one day SF convention at Derby Quad. In the meantime, I'd like to introduce our latest guest blogger: Madeline Ashby.
Madeline is a science fiction writer, columnist, and futurist living in Toronto. She is the author of the Machine Dynasty series from Angry Robot Books, as well as the forthcoming novel Company Town from Tor. Recently, she co-edited Licence Expired: the Unauthorized James Bond, an anthology for ChiZine Publications. She has worked with organizations like Intel Labs, the Institute for the Future, SciFutures, the Atlantic Council, Nesta, and others. You can find her on Twitter @MadelineAshby or at madelineashby.com.
On one of my visits to the Computer History Museum – and by the way this is an absolute must-visit place if you are ever in the San Francisco bay area – I saw an early Google server rack circa 1999 in the exhibits.
Not too fancy, right? Maybe even … a little janky? This is building a computer the Google way:
Instead of buying whatever pre-built rack-mount servers Dell, Compaq, and IBM were selling at the time, Google opted to hand-build their server infrastructure themselves. The sagging motherboards and hard drives are literally propped in place on handmade plywood platforms. The power switches are crudely mounted in front, the network cables draped along each side. The poorly routed power connectors snake their way back to generic PC power supplies in the rear.
Some people might look at these early Google servers and see an amateurish fire hazard. Not me. I see a prescient understanding of how inexpensive commodity hardware would shape today's internet. I felt right at home when I saw this server; it's exactly what I would have done in the same circumstances. This rack is a perfect example of the commodity x86 market D.I.Y. ethic at work: if you want it done right, and done inexpensively, you build it yourself.
- Supermicro P6SMB motherboard
- 256MB PC100 memory
- Pentium II 400 CPU
- IBM Deskstar 22GB hard drives (×2)
- Intel 10/100 network card
Inspired by Google and their use of cheap, commodity x86 hardware to scale on top of the open source Linux OS, I also built our own servers. When I get stressed out, when I feel the world weighing heavy on my shoulders and I don't know where to turn … I build servers. It's therapeutic.
I like to give servers a little pep talk while I build them. "Who's the best server! Who's the fastest server!"— Jeff Atwood (@codinghorror) November 16, 2015
Don't judge me, man.
But more seriously, with the release of Intel's latest Skylake architecture, it's finally time to upgrade our 2013 era Discourse servers to the latest and greatest, something reflective of 2016 – which means building even more servers.
Discourse runs on a Ruby stack and one thing we learned early on is that Ruby demands exceptional single threaded performance, aka, a CPU running as fast as possible. Throwing umptazillion CPU cores at Ruby doesn't buy you a whole lot other than being able to handle more requests at the same time. Which is nice, but doesn't get you speed per se. Someone made a helpful technical video to illustrate exactly how this all works:
So, good news! Although PC performance has been incremental at best in the last 5 years, between Haswell and Skylake, Intel managed to deliver a respectable per-thread performance bump. Since we are upgrading our servers from Ivy Bridge (very similar to the i7-3770k), the generation before Haswell, I'd expect a solid 33% performance improvement at minimum.
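That 33% figure is roughly what three generations' worth of per-thread gains compound to. The ~10% per-generation improvement used below is an assumed round number for illustration, not an Intel-published figure:

```python
# Back-of-envelope for the "33% at minimum" claim: three generations
# separate Ivy Bridge from Skylake, and compounding an assumed ~10%
# per-generation per-thread gain lands almost exactly on 33%.
per_gen_gain = 0.10
generations = ["Haswell", "Broadwell", "Skylake"]

speedup = 1.0
for gen in generations:
    speedup *= 1 + per_gen_gain
    print(f"after {gen:9s}: {speedup:.2f}x vs Ivy Bridge")
```

Compounding matters here: three "incremental" 10% bumps multiply out to 1.331x, not 1.30x, which is why skipping a couple of generations before upgrading tends to pay off.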
The flip side is that the more cores they pack on a single chip, the slower they all go. From Intel's current Xeon E5 lineup:
- E5-1680 → 8 cores, 3.2 GHz
- E5-1650 → 6 cores, 3.5 GHz
- E5-1630 → 4 cores, 3.7 GHz
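The trade-off in that list is easy to make explicit: aggregate throughput climbs with core count while the clock every single thread sees falls, which is exactly the wrong direction for Ruby.

```python
# The E5 list above, in both currencies: single-thread clock vs the
# naive "cores x clock" aggregate (ignoring turbo, IPC, and memory).
xeons = {"E5-1680": (8, 3.2), "E5-1650": (6, 3.5), "E5-1630": (4, 3.7)}

for model, (cores, ghz) in xeons.items():
    print(f"{model}: single-thread {ghz} GHz, "
          f"aggregate {cores * ghz:.1f} core-GHz")
```

The 8-core part wins the aggregate race handily while losing the per-thread race, so for a single-threaded-bound stack you are paying extra money to go slower.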
Sad, isn't it? Which brings me to the following build for our core web tiers, which optimizes for "lots of inexpensive, fast boxes"
2013 build:
- Xeon E3-1280 V2 Ivy Bridge 3.6 GHz / 4.0 GHz quad-core ($640)
- SuperMicro X9SCM-F-O mobo ($190)
- 32 GB DDR3-1600 ECC ($292)
- SC111LT-330CB 1U chassis ($200)
- Samsung 830 512GB SSD ×2 ($1080)
- 1U heatsink ($25)
- 31w idle, 87w BurnP6 load

2016 build:
- i7-6700k Skylake 4.0 GHz / 4.2 GHz quad-core ($370)
- SuperMicro X11SSZ-QF-O mobo ($230)
- 64 GB DDR4-2133 ($520)
- CSE-111LT-330CB 1U chassis ($215)
- Samsung 850 Pro 1TB SSD ×2 ($886)
- 1U heatsink ($20)
- 14w idle, 81w BurnP6 load
So, about 10% cheaper than what we spent in 2013, with 2× the memory, 2× the storage (probably 50-100% faster too), and at least ~33% faster CPU. With lower power draw, to boot! Pretty good. Pretty, pretty, pretty, pretty good.
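For the record, here's the arithmetic behind that claim, just summing the part lists above:

```python
# Totals for the two part lists above (prices in USD, per component).
build_2013 = {"CPU": 640, "motherboard": 190, "RAM": 292,
              "chassis": 200, "SSDs": 1080, "heatsink": 25}
build_2016 = {"CPU": 370, "motherboard": 230, "RAM": 520,
              "chassis": 215, "SSDs": 886, "heatsink": 20}

old, new = sum(build_2013.values()), sum(build_2016.values())
print(f"2013: ${old}   2016: ${new}   saving: {1 - new / old:.0%}")
```

These exact prices work out to roughly an 8% saving, hence "about 10%", and that's before counting the doubled RAM and storage.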
(Note that the memory bump is only possible thanks to Intel finally relaxing their iron fist of maximum allowed RAM at the low end; that's new to the Skylake generation.)
One thing is conspicuously missing in our 2016 build: Xeons, and ECC Ram. In my defense, this isn't intentional – we wanted the fastest per-thread performance and no Intel Xeon, either currently available or announced, goes to 4.0 GHz with Skylake. Paying half the price for a CPU with better per-thread performance than any Xeon, well, I'm not going to kid you, that's kind of a nice perk too.
Error-correcting code memory (ECC memory) is a type of computer data storage that can detect and correct the most common kinds of internal data corruption. ECC memory is used in most computers where data corruption cannot be tolerated under any circumstances, such as for scientific or financial computing.
Typically, ECC memory maintains a memory system immune to single-bit errors: the data that is read from each word is always the same as the data that had been written to it, even if one or more bits actually stored have been flipped to the wrong state. Most non-ECC memory cannot detect errors although some non-ECC memory with parity support allows detection but not correction.
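To make that quote concrete, here's the smallest classic version of the trick: a Hamming(7,4) code stores 4 data bits in 7 and can locate and flip back any single corrupted bit. (Real ECC DIMMs use a wider 72-bits-per-64 code with an extra check bit so double errors are at least detected, but the principle is the same.)

```python
# Toy illustration of single-bit error correction: Hamming(7,4).
# Parity bits sit at codeword positions 1, 2 and 4; data at 3, 5, 6, 7.

def hamming74_encode(d):                 # d: list of 4 bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):                # c: list of 7 bits, maybe corrupt
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3      # = 1-based index of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]]      # recover the 4 data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                             # a "cosmic ray" flips one bit
assert hamming74_correct(code) == word   # corrected transparently
```

The cost is visible even in the toy version: 3 extra bits per 4 data bits, versus 8 extra per 64 on a real ECC DIMM, which is why ECC RAM carries a modest price and capacity overhead rather than a huge one.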
It's received wisdom in the sysadmin community that you always build servers with ECC RAM because, well, you build servers to be reliable, right? Why would anyone intentionally build a server that isn't reliable? Are you crazy, man? Well, looking at that cobbled together Google 1999 server rack, which also utterly lacked any form of ECC RAM, I'm inclined to think that reliability measured by "lots of redundant boxes" is more worthwhile and easier to achieve than the platonic ideal of making every individual server bulletproof.
Being the type of guy who likes to question stuff… I began to question. Why is it that ECC is so essential anyway? If ECC was so important, so critical to the reliable function of computers, why isn't it built in to every desktop, laptop, and smartphone in the world by now? Why is it optional? This smells awfully… enterprisey to me.
Now, before everyone stops reading and I get permanently branded as "that crazy guy who hates ECC", I think ECC RAM is fine:
- The cost difference between ECC and not-ECC is minimal these days.
- The performance difference between ECC and not-ECC is minimal these days.
- Even if ECC only protects you from rare 1% hardware error cases that you may never hit until you literally build hundreds or thousands of servers, it's cheap insurance.
I am not anti-insurance, nor am I anti-ECC. But I do seriously question whether ECC is as operationally critical as we have been led to believe, and I think the data shows modern, non-ECC RAM is already extremely reliable.
First, let's look at the Puget Systems reliability stats. These guys build lots of commodity x86 gamer PCs, burn them in, and ship them. They helpfully track statistics on how many parts fail either from burn-in or later in customer use. Go ahead and read through the stats.
For the last two years, CPU reliability has dramatically improved. What is interesting is that this lines up with the launch of the Intel Haswell CPUs which was when the CPU voltage regulation was moved from the motherboard to the CPU itself. At the time we theorized that this should raise CPU failure rates (since there are more components on the CPU to break) but the data shows that it has actually increased reliability instead.
Even though DDR4 is very new, reliability so far has been excellent. Where DDR3 desktop RAM had an overall failure rate in 2014 of ~0.6%, DDR4 desktop RAM had absolutely no failures.
SSD reliability has dramatically improved recently. This year Samsung and Intel SSDs only had a 0.2% overall failure rate compared to 0.8% in 2013.
Modern commodity computer parts from reputable vendors are amazingly reliable. And their trends show from 2012 onward essential PC parts have gotten more reliable, not less. (I can also vouch for the improvement in SSD reliability as we have had zero server SSD failures in 3 years across our 12 servers with 24+ drives, whereas in 2011 I was writing about the Hot/Crazy SSD Scale.) And doesn't this make sense from a financial standpoint? How does it benefit you as a company to ship unreliable parts? That's money right out of your pocket and the reseller's pocket, plus time spent dealing with returns.
We had a, uh, "spirited" discussion about this internally on our private Discourse instance.
This is not a new debate by any means, but I was frustrated by the lack of data out there. In particular, I'm really questioning the difference between "soft" and "hard" memory errors:
But what is the nature of those errors? Are they soft errors – as is commonly believed – where a stray Alpha particle flips a bit? Or are they hard errors, where a bit gets stuck?
I absolutely believe that hard errors are reasonably common. RAM DIMMs can have bugs, or the chips on the DIMM can fail, or there's a design flaw in circuitry on the DIMM that only manifests in certain corner cases or under extreme loads. I've seen it plenty. But a soft error where a bit of memory randomly flips?
There are two types of soft errors, chip-level soft error and system-level soft error. Chip-level soft errors occur when the radioactive atoms in the chip's material decay and release alpha particles into the chip. Because an alpha particle contains a positive charge and kinetic energy, the particle can hit a memory cell and cause the cell to change state to a different value. The atomic reaction is so tiny that it does not damage the actual structure of the chip.
Outside of airplanes and spacecraft, I have a difficult time believing that soft errors happen with any frequency, otherwise most of the computing devices on the planet would be crashing left and right. I deeply distrust the anecdotal voodoo behind "but one of your computer's memory bits could flip, you'd never know, and corrupted data would be written!" It'd be one thing if we observed this regularly, but I've been unhealthily obsessed with computers since birth and I have never found random memory corruption to be a real, actual problem on any computers I have either owned or had access to.
But who gives a damn what I think. What does the data say?
A 2007 study found that the observed soft error rate in live servers was two orders of magnitude lower than previously predicted:
Our preliminary result suggests that the memory soft error rate in two real production systems (a rack-mounted server environment and a desktop PC environment) is much lower than what the previous studies concluded. Particularly in the server environment, with high probability, the soft error rate is at least two orders of magnitude lower than those reported previously. We discuss several potential causes for this result.
A 2009 study on Google's server farm notes that soft errors were difficult to find:
We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode.
Yet another large scale study from 2012 discovered that RAM errors were dominated by permanent failure modes typical of hard errors:
Our study has several main findings. First, we find that approximately 70% of DRAM faults are recurring (e.g., permanent) faults, while only 30% are transient faults. Second, we find that large multi-bit faults, such as faults that affects an entire row, column, or bank, constitute over 40% of all DRAM faults. Third, we find that almost 5% of DRAM failures affect board-level circuitry such as data (DQ) or strobe (DQS) wires. Finally, we find that chipkill functionality reduced the system failure rate from DRAM faults by 36x.
In the end, we decided the non-ECC RAM risk was acceptable for every tier of service except our databases. Which is kind of a bummer since higher end Skylake Xeons got pushed back to the big Purley platform upgrade in 2017. Regardless, we burn in every server we build with a complete run of memtest86 and an overnight prime95/mprime run, and you should too. There's one whirring away through endless memory tests right behind me as I write this.
I find it very, very suspicious that ECC – if it is so critical to preventing these random, memory corrupting bit flips – has not already been built into every type of RAM that we ship in the ubiquitous computing devices all around the world as a cost of doing business. But I am by no means opposed to paying a small insurance premium for server farms, either. You'll have to look at the data and decide for yourself. Mostly I wanted to collect all this information in one place so people who are also evaluating the cost/benefit of ECC RAM for themselves can read the studies and decide what they want to do.
Please feel free to leave comments if you have other studies to cite, or significant measured data to share.
Authors: L.M. Howes et al.
First author’s institution: Research School of Astronomy & Astrophysics, Australian National University
Status: Published in Nature
Every way you look in the night sky, you see twinkles of ancient light streaming in from the forgotten past. Our observable Universe is teeming with a mind-bogglingly large number of stars, on the order of ~10²². It is incredible to think that roughly 13 billion years ago, there were none. When the largest dark matter overdensities collapsed around 200 million years after the Big Bang (about one-hundredth of the Universe's current age), they eventually grew to become the inner regions (or "bulges") of galaxies by attracting matter around them. Within these murky nebulous regions, the very first stars burst gloriously into existence.
What did the first stars look like? We have never seen one, but the early Universe left a smattering of clues for us. We think the first stars should be metal-free, because the primordial Universe contained only the simple elements hydrogen and helium. Call us astronomers failed chemists — "metals" in the astronomy world mean any elements heavier than helium, such as lithium, iron, and carbon. Metals are produced in stellar cores through fusion reactions, and are then dispersed into galaxies through supernovae. New stars that form later will then contain some of these recycled metals. As a galaxy ages, its environment becomes more and more chemically enriched by supernova events. Therefore, the more metal a star contains, the later it formed and the younger it is. The amount of metal in a star is known as its "metallicity": the logarithm of the star's ratio of metal atoms to hydrogen atoms, minus the same logarithmic ratio for the Sun (so the Sun's metallicity is 0 by definition). It is commonly written as [Z/H], where Z refers to a certain species of metal; most of the time, Z is iron (symbol: Fe). Young stars rich in metals have positive [Fe/H], while old metal-poor stars typically have negative [Fe/H].
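Since the zero point is where that definition most often trips people up, here is the bookkeeping as a short sketch (the abundance numbers are invented for illustration):

```python
import math

def fe_h(n_fe_star: float, n_h_star: float,
         n_fe_sun: float, n_h_sun: float) -> float:
    """[Fe/H]: log10 of the star's iron-to-hydrogen number ratio,
    minus the same log ratio for the Sun (so the Sun is 0 by definition)."""
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with 1/200th of the Sun's iron-to-hydrogen ratio has
# [Fe/H] ≈ -2.3, like the 23 metal-poor bulge stars in this paper.
print(fe_h(1.0, 200.0, 1.0, 1.0))  # ≈ -2.30
```

Note that the scale is logarithmic: each step of -1 in [Fe/H] means ten times less iron relative to hydrogen.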
Senior Citizens in the Milky Way Bulge…
The Milky Way bulge is thought to be swarming with immediate descendants of the first stars. However, it is very difficult to detect them due to dust and crowding from other stars. Using the SkyMapper telescope in Australia, the authors successfully detected 500 metal-poor bulge stars, making this survey the first of its kind.
Among the metal-poor bulge stars, the authors re-observed the 23 most metal-poor ones to determine their chemical composition. These 23 stars have [Fe/H] ≤ -2.3, roughly similar to most halo stars. Despite the similar metallicities, these bulge stars formed much earlier than their halo counterparts. Skeptics might be suspicious of the bulge membership of these stars, since halo stars can sometimes pass through the bulge as interlopers. The authors measured the orbits and velocities of their stars and established that half of them are on tightly bound orbits around the bulge (including their most metal-poor star), confirming that at least half of the 23 stars are real members of the bulge.
…with a distaste for Carbon
The authors compared the chemical composition of their stars with halo stars at the same metallicities to look for possible differences. For most elements, the metal-poor bulge stars are very similar to their halo counterparts. Carbon, however, begs to differ. Many metal-poor halo stars have an abnormally high abundance of carbon relative to iron ([C/Fe] > 1), and the fraction of carbon-enhanced stars increases with decreasing metallicity. Don't ask why yet — astronomers are still scratching their heads (and working hard on possible explanations). These stars are appropriately named "carbon-enhanced metal-poor stars", or CEMP stars. Here lies the surprise: while the metal-poor stars in the Milky Way halo are often carbon-enhanced, the metal-poor bulge stars aren't. This is a double surprise, because theory also predicts that we should find more carbon-enhanced stars towards the bulge. Figure 1 shows the carbon abundance vs. metallicity of these bulge stars.
Among the 23 stars, the authors uncovered a star with an unprecedented metallicity, one that is at least ten times more iron-poor than the other metal-poor bulge stars. This star sets a new record as the most metal-poor and oldest bulge star we know of today. The authors also did not detect any carbon in the star's spectrum, achieving only an upper limit (the red arrow in Figure 1). By comparing the chemical composition of the star with synthetic yields from different types of supernovae, as shown in Figure 2, the authors suggested that it could have arisen from a hypernova, a supernova roughly ten times more powerful.
There is still a lot of room for discovery as the authors only covered one-third of the Milky Way bulge. With a larger and deeper survey, we should be able to find more metal-poor stars in the Milky Way bulge to help propel our currently very limited understanding of these stars and the first stars.
Two JAXA mission updates: Akatsuki Venus orbit entry and PROCYON Earth flyby coming up! by The Planetary Society
Akatsuki is finally approaching its second attempt to enter Venus orbit, on December 7; let's all wish JAXA the best of luck! And PROCYON, whose ion engines have failed, is still an otherwise perfectly functional spacecraft that is taking photos of Earth and the Moon as it approaches for a flyby.
United Launch Alliance plans to include a CubeSat carrier on nearly every one of its Atlas V and Vulcan rockets starting in 2017, the company announced today.
November 19, 2015
An Imagined Museum: works from the Centre Pompidou, Tate and MMK collections at Tate Liverpool. by Feeling Listless
Art All art is about memory. Actually let me clarify this grandiose statement, because everything in life is about memory. Walking down the street is "about memory" since we're creating a memory of walking down the street for ourselves and others, especially if we trip over a flagstone or absentmindedly walk into a bollard. It's clearer to say that all art is about the representation of memory. All of it. From Jackson Pollock's drip paintings, which record the process of the period in which he made them, to the representations of horses in the Chauvet caves of Southern France, in which the painter attempts to capture their memory of the movement of the beasts, to Michael Craig-Martin's An Oak Tree, in which we're confronted with a beaker of water sat on top of a glass shelf which represents our memory of what an oak tree looks like.
This notion is at the heart of Tate Liverpool's new exhibition, An Imagined Museum: works from the Centre Pompidou, Tate and MMK collections, which pulls together sixty works dating from 1945 onwards as a kind of "imaginary museum" within a supposed scenario, inspired by Ray Bradbury's Fahrenheit 451, in which the works are on the edge of vanishing, disappearing from humanity's cultural assets so that they'll then only exist in the memories of those lucky enough to have seen them. The show culminates in a two-day performance event on 20-21st February when the objects will actually be removed from the space, to be replaced by volunteers who'll then be called upon to represent the works and describe their memories of them to visitors.
When a friend first described the idea of the exhibition to me, it's fair to say my reaction was emotional. There may have been tears. I remembered the scene from the film version of Bradbury's book in which members of the commune are shown speaking the words of the texts they've been tasked with memorising, a library of human beings walking back and forth, their own brains vulnerable recording devices containing the single surviving copy of these great works of literature. I thought about what it would be like to see a live version of that within a gallery space, and how tenuous and fragile the task will be when it's about a person's perceptions of an object and the impossible responsibility of having to do it justice.
In the event, if the exhibition doesn't quite live up to the version that existed in my head, it's because the works selected are not necessarily from artists of whom I'm a huge fan. We've discussed Warhol at length before, and although there's plenty of conceptual art which I find appealing (and Felix Gonzalez-Torres returns to Tate here with a similar idea to his clocks, but with bulbs this time) I find myself mired in practicality when confronted with plenty of the work here, notably Marcel Duchamp or something like Edward Krasinski's Ostrich egg. The concept for the exhibition could equally have been applied to any group of objects, and I'd be lying if I didn't say that I wished there were more representational works.
Nevertheless, there are plenty of good surprises, many of which key into the notion of art as memory. Rachel Whiteread's resin imprint of a bath. Allan McCollum's Plaster Surrogates, ceramic and plaster recreations of empty photo frames of the kind you'd find on sale in Rennies on Bold Street, potentially influenced by Paul Almasy's famous photograph, also included here, of the Louvre during the Occupation in 1942, in which empty frames represented artworks removed and secured from Nazi plunder. Lawrence Weiner's STONES FOUND AND BROKEN SOMETIME IN THE FUTURE, which writes that statement across a wall in English and German with the idea that a sculpture is created from it within the visitor's own memory.
Although Chris Marker's La Jetée also makes an appearance, the highlight of the exhibition is undoubtedly Dan Graham's Present Continuous Past(s), which I'm unable to talk about without spoiling the very thing that makes it an unforgettable experience and one of the most exciting pieces of contemporary art I've ever seen. Which is odd considering it's as old as I am, produced in 1974, something which is odd only if you appreciate what it contains. Watch this video and read this explanation if you want to spoil the mystery, but know that I spent a good ten minutes inside the room it contains walking about, experimenting, giggling and thinking of the Star Trek episode it reminded me of. If I didn't work weekends, it's this piece I'd certainly volunteer to replace in February.
Fittingly, Tate Liverpool hasn't produced a traditional catalogue for An Imagined Museum, which means that for the most part it will only exist in the memory of visitors, those chosen to describe specific works, and whatever material is distributed in the media. There is however a permanent record: the artist Dora Garcia has been commissioned to produce a newspaper, given away free in the space, which provides an inventory of the works on display along with texts ruminating on the themes of the show and the fragile nature of artworks. Which does do the same job as a catalogue, I suppose, but feels more ephemeral because it will presumably end distribution when the show closes, or at least when it continues its tour to the other venues. After that then.
Exhibition runs from 20 November 2015 – 14 February 2016. Free for Tate Members. Adult £8.80 (without donation £8). Concession £6.60 (without donation £6) (press day attended).
Thursday 19th November 2015 (1:00AM)
Please file this under Nicholas-is-a-smart-arse. :-)
In my experience, "what is the meaning of life?" is viewed as a difficult question. Nothing could be further from the truth, so let's just get to the answer shall we?
We're simply asking in the wrong way. Instead of wondering what something is, we can flip the question around and ask what its opposite isn't. Ergo, honestly listing the reasons for not killing yourself tells you what gives meaning to your life.
Try it for yourself!
Obviously, the answer is different for each of us and changes over time. But that's essentially it: asking the meaning of life is solved with a glib philosopher's dinner party trick.
Didn't I tell you to file this under Nicholas-is-a-smart-arse?!?!
"Wait a minute..." I hear you ask, "there must be more to it than that! What about deep stuff like God?"
Yes, what about God?
I am an atheist and find it hard to discuss "God" seriously. To give you a flavour of what I mean (and I'm certainly not trying to be disrespectful here) when discussing religion please replace "God" with "flying spaghetti monster" and "heaven" with "Sugarcandy Mountain". If the result starts to sound silly, then you have a sense of how all religious discussion sounds to me.
Given that I'm a smart-arse, perhaps I should top the meaning of life with proof that there is no god.
Actually, I don't have to because the burden of proof is not my responsibility ~ it's religious people who are asserting something (that there is a god).
Let's play another word replacement game to illustrate what I mean: where a religious person might say "God" let's say "a teapot orbiting the Sun" instead. As a result the assertion becomes "I believe there's a teapot orbiting the Sun" to which I might reply, "oh no there isn't". The response, "OK smart-arse, prove there is no teapot orbiting the Sun" shows how bonkers and untenable such a position is. Any reasonable person would expect the person making such assertions to be the one to prove there is a teapot orbiting the Sun, rather than waiting for me to prove there isn't such celestial brewing equipment. Put simply, just because it's possible to assert something that is blatantly unverifiable doesn't make it true.
My favourite example of this phenomenon is a drogulus ~ an entity whose presence is unverifiable, because it has no physical effects. The atheist philosopher A. J. Ayer coined it as a way of ridiculing the belief system of his friend, the Jesuit philosopher, Frederick Copleston.
In 1949 Ayer and Copleston took part in a radio debate about the existence of God. The debate then went back and forth, until Ayer came up with the following as a way of illustrating the point that Copleston's metaphysics had no content because there was no way of testing the truth of metaphysical assertions. He said:
"I say, 'There's a "drogulus" over there,' and you say, 'What?' and I say, 'drogulus' and you say 'What's a drogulus?' Well, I say, 'I can't describe what a drogulus is, because it's not the sort of thing you can see or touch, it has no physical effects of any kind, but it's a disembodied being.' And you say, 'Well how am I to tell if it's there or it's not there?' and I say, 'There's no way of telling. Everything's just the same if it's there or it's not there. But the fact is it's there. There's a drogulus there standing just behind you, spiritually behind you.' Does that make sense?"
Of course, the natural answer Ayer was waiting for was "No, of course it doesn't make sense." Therefore, the implication would be that metaphysics is like the "drogulus" ~ a being which cannot be seen and has no perceptible effects. If Ayer can get to that point, he can claim that any kind of belief in the Christian God or in metaphysical principles in general is really contrary to our logical and scientific understanding of the world.
So, what am I saying?
That is a complicated question - it's not just synonymous with "what is my argument?".
The examples above demonstrate that language is a source of confusion. It's possible to ask, "what am I saying?" in a more fundamental way - synonymous with "what do words mean?". Habit or familiarity appear to lead to lazy thinking: something must be true because it's repeated in such-and-such a way by lots of people. Yet, when the way that thing is expressed in language is explored, it becomes obvious it's actually confused and a tad silly.
Ask yourself, what is the meaning of "life"? (Or any other word or turn of phrase.)
So, what's my argument? ;-)
If language is such a blunt tool and endless source of confusion how are we to discuss important subjects such as how to lead a good life? How are we to share complex and confusing ideas such as our dreams, loves and losses?
I'm not sure.
It might be best to merely act, observe, reflect and adjust. Know me by what I do, how I act and the way that I change myself rather than by what I might say (to be clear, the content of this article is covered by this assertion).
Like I said, file this under Nicholas-is-a-smart-arse.
As always, I'd love feedback, constructive critique and ideas about such raw thoughts.
Title: The Strikingly Uniform, Highly Turbulent Interstellar Medium of the Most Luminous Galaxy in the Universe
Authors: T. Diaz-Santos, R. J. Assef, A. W. Blain, C.-W. Tsai, M. Aravena, P. Eisenhardt, J. Wu, D. Stern, & C. Bridge
First Author’s Institution: Nucleo de Astronomia de la Facultad de Ingenieria, Universidad Diego Portales, Santiago, Chile
It’s year 10⁹ post-Big Bang, and the galaxies are awakening.
It’s a busy time of growth in the life of our universe’s galaxies. Galaxies are booming with new stars—star formation is going to peak in a billion years or two, then drop precipitously thereafter. Also growing heartily are supermassive black holes, buried deep in the heart of each massive galaxy. Some are already a staggering billion solar masses. In a billion years or so, they too will be undergoing peak growth. It’ll never quite be the same again in the history of the universe, nor in the lives of its then-adolescent galactic citizens.
This era of growth is one in which some of the brightest galaxies we know of have been found. The brightest galaxy in the universe that we know of to date, W2246-0526, lives at a redshift of z ~ 4.6 (about a billion years after the Big Bang) and radiates at an astounding rate of 3.5 × 10¹⁴ L☉, about 10,000 times brighter than a typical star-forming galaxy you’d see today, such as the Milky Way. Its cryptic name hides a clue to why it’s so bright—its first letter is an abbreviation of the name of the telescope that discovered it, the space-borne Wide-field Infrared Survey Explorer (WISE), which was designed to image the entire sky in the infrared and find the most luminous infrared galaxies in the universe. W2246-0526, in fact, appears to radiate almost entirely in the infrared, unlike typical galaxies, which prefer to radiate at visible and UV wavelengths. Infrared emission is typically radiated by hot dust, thus the galaxy has been classified as a Hot Dust Obscured Galaxy, or Hot DOG for short.
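That "about 10,000 times brighter" comparison is simple arithmetic; as a rough check (taking ~3 × 10¹⁰ L☉ as an assumed round figure for a Milky Way-like galaxy's total luminosity):

```python
# Ratio of W2246-0526's luminosity to a typical star-forming galaxy's.
L_w2246 = 3.5e14   # luminosity of W2246-0526 in solar luminosities (from the paper)
L_typical = 3e10   # assumed round figure for a Milky Way-like galaxy

print(L_w2246 / L_typical)  # ≈ 1.2e4, i.e. roughly 10,000 times brighter
```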
What heats up all that dust? At the heart of such galaxies is a supermassive black hole that’s consuming the gas surrounding it, a system called an active galactic nucleus (AGN). As the gas is consumed by the black hole, the gravitational potential energy lost by the gas is converted in part into heat, causing the gas to heat up to millions of degrees. Gas this hot glows bright in visible and UV light. The staggering amount of dust surrounding a Hot DOG, however, absorbs this light and re-emits it in the infrared, giving rise to Extremely Luminous InfraRed Galaxies (ELIRGs). Given the extravagant luminosity of W2246-0526, it must be illuminated by a powerful AGN, powered by a ravenous and rapidly growing black hole. But rapid growth is often accompanied by growing pains, a rule from which galaxies are not exempt—the black hole draws the material it accretes from the same reservoir from which its host galaxy forms stars. Thus both the galaxy’s AGN and star formation may shut off quickly thereafter.
At what stage might W2246-0526 be in this process? The authors of today’s paper sought to answer this question by mapping the galaxy’s star-forming gas, the interstellar medium (ISM). To do this, they imaged the galaxy in one particular emission line of singly-ionized carbon by which much of the ISM cools, thus making it a good tracer of the ISM (see these notes, starting on page 4, for more details). What they found was striking. They derived the projected velocity profile for the galaxy, which revealed that it exhibits minimal rotation, indicating that there is no significant disk (see Figure 1). In fact, the velocity dispersion—a measure of how randomly the gas is moving, typically small compared to the rotation velocity for a typical spiral galaxy—was found to be larger than the rotation velocity (see Figure 2), and among the highest observed in any galaxy. High velocity dispersions can signify that the gas is extremely turbulent. But that wasn’t all—the authors found that the velocity dispersion was surprisingly uniform, indicating that the highly turbulent gas must fill the galaxy up to several kiloparsecs from its center, possibly by expanding isotropically.
Why might W2246-0526’s ISM have the shape and dynamics that it does? The authors point out that the galaxy is so luminous that its central AGN can blow out the dust surrounding it by the force of the light it radiates. They also point out that should the AGN transfer even 10% of its energy to the ISM, it’ll cause an isothermal bubble of gas to expand isotropically. Thus the gas and dust aren’t stable—they’ll soon be blown away, lifting the heavy cloak of dust that surrounds the fiery AGN inside. It looks like we’ve caught the most luminous galaxy in the universe at a special phase in its life, a point where it’s on the brink of emerging from its dusty cocoon to become an unobscured AGN shining in its native optical and UV light.
Updated numbers for physical properties of the comet, and a few interesting images of surface features and surface changes on Churyumov-Gerasimenko.
At NASA's Johnson Space Center, Preparing for the Future of Human Spaceflight by The Planetary Society
As NASA kicks off a multi-decadal effort to send humans to Mars, the agency's traditional human spaceflight centers have had to adapt to new challenges—often more programmatic than technical.
November 18, 2015
TV HMV have emitted details of a series of classic Doctor Who boxed set re-releases to coincide with no particular reason at all. Probably Christmas. So far there are three in all, each containing three stories.
An Introduction to the Third Doctor brings Spearhead From Space, The Dæmons and The Time Warrior.
An Introduction to the Fourth Doctor has Robot, Genesis of the Daleks (yes, again) and Pyramids of Mars (again).
An Introduction to the Fifth Doctor does Castrovalva, Earthshock and The Caves of Androzani.
From a fan perspective, I'd only quibble with the Tom release in that it only reflects the Hinchcliffe years, and young fans might already have Pyramids after its inclusion on one of the SJA releases. I'd probably have swapped it out for City of Death, but I'm bound to say that.
The inclusion of the regeneration seems logical but also sort of odd because such things never reflect the given Doctor at their moment of apogee. In all three cases the version of the Doctor who appears isn't representative at all. Plus they were already included in that astonishingly weird Regenerations set from a few years ago.
Since these are bound to sell like hot samosas, let's see if I can guess what'll be on the other sets.
An Introduction to the First Doctor should have An Unearthly Child, The Dalek Invasion of Earth and The Time Meddler.
An Introduction to the Second Doctor might have The Tomb of the Cybermen, The Mind Robber and The War Games.
An Introduction to the Sixth Doctor could have The Twin Dilemma, Vengeance on Varos and Revelation of the Daleks.
An Introduction to the Seventh Doctor will be Time and the Rani, Remembrance of the Daleks and The Curse of Fenric.
An Introduction to the Eighth Doctor will be the TV movie, The Night of the Doctor and a rerelease of Bidding Adieu. Or not. Frankly he needs no introduction. Ideally Big Finish will tie-in with Storm Warning, The Chimes of Midnight and Blood of the Daleks.
An Introduction to the Ninth Doctor will be Rose, Dalek, The Empty Child/The Doctor Dances and Bad Wolf/Parting of the Ways.
An Introduction to the Tenth Doctor. Oh I don't know. The Christmas Invasion, Blink, Silence in the Library/Forest of the Dead and Planet of the Dead.
An Introduction to the Eleventh Doctor. The Eleventh Hour, Vincent and the Doctor, The Girl Who Waited and The Time of the Doctor.
It's increasingly hard to choose in the revivals because so many episodes are interrelated and the quality in some cases is so high. Also, the consensus is pretty solid about the great works of the classic series, whereas the revival is still in flux. The confirmed classics of the Tenth Doctor era, Human Nature/The Family of Blood and Blink, are both about how he's absent.
What would you choose?
- Title: High Resolution 8 mm and 1 cm Polarization of IRAS 4A from the VLA Nascent Disk and Multiplicity (VANDAM) Survey
- Authors: Erin G. Cox, Robert J. Harris, Leslie W. Looney, Dominique M. Segura-Cox, John Tobin, Zhi-Yun Li, Lukasz Tychoniec, Claire J. Chandler, Michael M. Dunham, Kaitlin Kratter, Carl Melis, Laura M. Perez, Sarah I. Sadavoy
- First Author’s Institution: Department of Astronomy, University of Illinois at Urbana-Champaign, Urbana, IL
- Paper Status: Accepted to Astrophysical Journal Letters
When explaining stellar formation to unfamiliar ears, gravity usually comes off as the star of the show. A shock impacting a cloud of gas and dust will produce a slight overdensity of material, allowing the tug of gravity to collapse this region into a smaller and smaller size, heating it up along the way and forming a disk through conservation of angular momentum. Through gravity’s unwavering grasp, the center of this collapsing cloud becomes hot and dense enough to trigger fusion, and voila – we created a star. Simple, right? Unfortunately, we forgot about a key and poorly understood player: magnetic fields.
Effects from magnetic fields have the ability to either enable collapse (e.g. through ambipolar diffusion) or hinder collapse (e.g. through magnetic pressure support) of large star-forming clouds, and at smaller scales these fields mediate wind launching, outflows, and jets. Magnetic fields are also believed to influence whether a protoplanetary disk can form and how it will accrete, which means they are integral to the formation of planets around young stars. Magnetohydrodynamic (MHD) simulations can give us some sense of this process, but adding magnetism to the complexity of hydrodynamic simulations is not an easy task and can produce ambiguous or contradictory results. To truly understand the role of magnetic fields in star and planet formation, we need to observe these fields around young stars. But how do we “see” magnetic fields? Obviously, traveling hundreds of lightyears to one of these baby stars with a compass would be highly impractical. Thankfully, we can infer a great deal about magnetic field structure by looking at how magnetic fields affect their environment – namely, through observing the polarization of light.
Accretion disks around a protostar are embedded with dust. These dust grains are not necessarily spherical, as they can be elongated like an American football. Magnetic fields in the disk cause the “short” axis of these dust grains to align with the field lines, the opposite of what would happen to a compass needle that aligns with Earth’s magnetic field. Similar to polarized sunglasses that reduce glare by blocking certain orientations of electromagnetic waves, these aligned dust grains will emit radiation that has an electric field parallel to their “long” axis, causing it to be polarized perpendicular to the magnetic field, and thus giving us a direct link to the field’s underlying structure (see figure 1). Furthermore, dust grains have a hard time radiating at wavelengths larger than their physical size, meaning you can probe the polarization of millimeter- and centimeter-sized grains with millimeter wavelengths.
Today’s paper took one of the closest looks yet at the polarization of light around a protostar to gauge its magnetic field structure. The target was IRAS 4A, a Class 0 protostar (the youngest class of protostellar objects) residing about 230 parsecs away in the Perseus Molecular Cloud. The weapon of choice: the Very Large Array (VLA). With an effective baseline diameter of 22 miles, the combined power of the 27 radio dishes of the VLA provided unprecedented detail of IRAS 4A at millimeter wavelengths, with an angular resolution of about one fifth of an arcsecond. That is equivalent to seeing two lightbulbs separated by a meter as separate sources if you were in Chicago and the light bulbs were in New York. Since the smallest resolvable angle grows linearly with wavelength and shrinks linearly with telescope diameter, huge effective diameters are necessary to achieve this angular resolution at the long wavelengths observed by the VLA.
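That scaling is just the diffraction limit, θ ≈ 1.22 λ/D. As a rough back-of-the-envelope sketch (the wavelength and baseline below are round illustrative numbers, not the paper's exact observing setup):

```python
import math

def angular_resolution_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Diffraction-limited resolvable angle, theta ~ 1.22 * lambda / D,
    converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

# A ~1 cm wavelength on a ~36 km (22 mile) effective aperture:
beam = angular_resolution_arcsec(0.01, 36_000)   # ~0.07 arcseconds

# For comparison, a 1 m separation seen from roughly the Chicago to
# New York distance (~1,150 km):
lightbulbs = math.degrees(1.0 / 1_150_000) * 3600.0  # ~0.18 arcseconds
```

The synthesized beam of a real interferometer is somewhat broader than this ideal, but the λ/D scaling is why sub-arcsecond imaging at roughly centimeter wavelengths demands baselines of tens of kilometers.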
Cox et al. measured the four Stokes parameters to understand the magnetic fields in the disk around IRAS 4A. Combined, these parameters determine the degree and orientation of the linearly polarized, circularly polarized, and unpolarized light. Their results are mapped in figure 2, which shows the polarization of light around IRAS 4A at two different millimeter wavelengths. By analyzing contaminating sources of unpolarized (e.g. free-free) and polarized (e.g. synchrotron) emission, the authors were able to confirm that the polarization was due primarily to dust aligned with the magnetic field. They found that at small, disk-sized scales the magnetic field wraps around the star, a morphology indicating that the field is shaped by circumstellar material rotating close to the central protostar.
Previous measurements of the polarization around this star at lower resolutions saw something quite different: an “hourglass” morphology in the magnetic field structure (see figure 3). This indicates a potential transition in the magnetic field morphology from larger spatial scales (~250 AU) to smaller spatial scales (~50 AU), though differences in the angular resolution and the grain sizes targeted by the two studies could also contribute to the discrepancy. As a test, Cox et al. degraded their own data to the poorer resolution, and sure enough reproduced the hourglass morphology (figure 3).
What causes this circular morphology in the magnetic field? The idea is that as material falls from the envelope to the nascent disk it drags “frozen-in” magnetic field lines with it, resulting in a morphological change of the magnetic field from the envelope to the disk. The study also indicates that dust grain growth happens at early times in protostar evolution, since the larger dust grains targeted by this study are more readily polarized than small ones. Though there is still plenty to learn about magnetic fields in these environments, this study puts us closer to fully realizing their effects on star and planet formation.
November 17, 2015
Film As I sat down to write this, or rather endured the several hours' worth of writer's block which happened while I was cogitating on how to write this, I remembered that my first encounter with Andrei Tarkovsky was not, as I'd assumed, whilst flicking through the course booklet for the Falstaff & Gandalf Go To The Movies module of the MA Screen Studies course I attended at Manchester University, but through my third year housemate Ed while I was an undergraduate, when he mentioned to me one evening while I was preparing dinner that he'd watched this film called Solaris that afternoon, that it was one of his favourite films and was a bit like the Russian 2001: A Space Odyssey.
There are plenty of reasons why I wouldn't have taken him up on his suggestion. Since this was the third year, I would have been knee deep in my dissertation and various other essays, and although I'd gulped many of my first breaths of international cinema in the first couple of years (see later) (probably this time next year at this rate), the idea of sitting down to watch a three hour Soviet space film would have been the last thing on my mind. Plus unlike the second year, when I didn't really like being in the same house as my housemates so spent a lot of time in the library watching videos, in the third year I rather liked them a lot. Plus I was just round the corner from the Hyde Park (see last week) so there wasn't much need for the VHS format which Solaris would have been on.
Since I do have writer's block:
Andrei Tarkovsky, Solaris and Stalker: The making of two inner-space odysseys
"Adding a frisson of autobiography, Tarkovsky initially planned to cast his ex-wife Irma Rausch as Hari. He then changed his mind, signing Swedish star Bibi Andersson, former collaborator of his directing idol Ingmar Bergman. But finally he settled on Natalya Bondarchuk, the beautiful Russian actor who had introduced him to Lem’s novel. Hari’s death scenes gained extra resonance in 2010 when Bondarchuk revealed she had an affair with Tarkovsky during the shoot, and attempted to kill herself after they split in 1972."
Auteur in Space:
"A new visual essay getting to the heart of Andrei Tarkovsky’s philosophical sci-fi masterpiece, Solaris (1972)."
One Scene: Solaris
"The first time I saw Solaris was on VHS in the mid-nineties. Even though the film affected me profoundly, I never watched it again until now. The richness of the images, the vividness of the mood, and the depth of the themes are so intense, they have simmered and lived in my mind for more than fifteen years just from that one viewing. Seeing it again, going from VHS to this new restoration, is truly a revelation. It's like owning a pristine 35 mm print."
In the Tarkovsky film "Solaris" what is the deal with the overly long highway scene through Tokyo?
"Do we "dream" when we get up in the mornings, only to return to the real world when we close our eyes and sleep at night, perhaps? Are our memories, residing within our minds and more directly personal and pure to us than incoming "new" interactions, more real than the world around us that we also experience strictly through interpretations of thing within our own minds? Can we perhaps be free to choose between the two, and what does the answer (be it yes or no) say about the human condition?"
Will Self and Mike Hodges on Solaris:
"Will Self and director Mike Hodges discuss Andrei Tarkovsky's film Solaris, based on Stanisław Lem novel."
Again, with 20% more existential grief.
"Steven Soderbergh: Well, I guess memory was an issue that I dealt with a couple of times before and this seemed to be a very interesting way of talking about memory - having a character that was a physical manifestation of someone's memory seemed like a very intriguing idea to me. And I wasn't at all of a mind that the Tarkovsky film could be improved upon; I thought there was a very different interpretation to be had. The analogy that I use was that the Lem book, which was full of so many ideas that you could probably make a handful of films from it, was the seed, and that Tarkovsky generated a sequoia and we were sort of trying to make a little bonsai. And that was really what we were doing - I took a very specific aspect of the book and tried to expand Rheya's character and bring her up to the level of Kelvin."
Authors: P. Pietrukowicz, et al.
First Author’s Institution: Warsaw University Observatory
Status: Accepted to ApJL
For years, astronomers have been trying to resolve the missing satellites problem: cosmological simulations of dark matter show orders of magnitude more dwarf galaxies than we actually observe. Another closely related issue is the too big to fail problem, which points out that not only do we not seem to have enough satellite galaxies, they are also not as massive as we’d expect. We could be missing satellites because they don’t have enough stars and gas to detect, but the too big to fail problem says that some of the satellite galaxies we’re missing are so big that they must have enough stars and gas to be detectable. One way that astronomers have been trying to solve the former problem is simply to go out and look for these missing satellites.
Last February, a group of astronomers claimed to have found a new dwarf galaxy close to the plane of the Milky Way. They had found what they believed to be four classical Cepheid variables clustered within one degree of each other, at a distance of 90 kiloparsecs and a Galactic longitude and latitude of l=-27.4 and b=-1.08 respectively. The authors had discovered these Cepheids using Ks-band (2.14 µm) data from the VISTA Variables in the Vía Láctea (VVV) survey. Cepheids are rare objects, so finding four of them in such close proximity on the sky and at that distance from the center of the Milky Way is unusual—unless, of course, they happen to be part of a previously undiscovered dwarf galaxy. The light curves for these objects are shown in Figure 1.
This paper was unusual and exciting not only because it could potentially have helped bring us closer to resolving the anomalies of the missing satellites and the too big to fail problem, but also because the authors claimed to have discovered a dwarf galaxy using Cepheids they had discovered in the near-infrared, very close to the Galactic plane. Both of these happen to be fairly novel methods for discovering Cepheids and dwarf galaxies. Cepheids have bigger amplitudes in the optical bandpasses, so data from optical wavelengths are usually used for finding them, although a few recent papers have tried to look for them in the infrared. This group of astronomers chose near-infrared, however, because it is less sensitive to dust. This was useful since they were looking close to the dusty plane of the Milky Way.
And here is where the “cautionary” part of our tale comes in: just a few weeks ago, another group published a response to the dwarf galaxy discovery paper. That response paper is the focus of today’s astrobite. Its authors looked at the four objects reported in the previous paper and found that likely none of them are Cepheid variables. Moreover, two of the objects appear not to be variable stars at all. How can this be?
A Cautionary Tale
First, let’s take a look at what the authors of this paper found. Figure 2 from their paper shows the I-band light curves of three of the four objects reported in the previous paper. These data were taken from a different survey, the OGLE Galaxy Variability Survey (OGLE GVS), and folded with the periods given by the previous paper (you can think of folding as taking the time-series data and plotting it mod the period). The objects are referenced by the names they were given in the discovery paper: S1, S2, S3, and S4. As we can see from Figure 2, S1 and S2 do not seem to be periodic with the periods listed in the first paper. In fact, they are most likely not periodic at all. Their power spectra, shown in Figure 3, reflect this as well, showing no sign of any other periodicity.
S3, on the other hand, does seem to be variable with the period given in the previous paper, as can be seen in both its light curve and its power spectrum in Figures 2 and 3. However, the shape of its light curve doesn’t match what would be expected from a classical Cepheid with that period (see Figure 4 for comparisons with other Cepheids). This paper suggests instead that S3 might be a spotted star. Even if you haven’t spent much time reading about astronomy, you’ve probably heard of sunspots. Starspots are the same idea (except on stars other than the Sun) and can make otherwise non-variable stars look variable, depending on how “spotted” a star happens to be. A star spotted heavily enough could reproduce the variability seen here. This sort of variability changes over the course of years, but can look constant on short time scales, like the time scale of the data in the previous paper.
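A power spectrum of this kind just asks, frequency by frequency, how coherently the brightness varies. Here is a minimal hand-rolled sketch on invented data (a classical periodogram, not the more careful Lomb-Scargle machinery a real analysis would use):

```python
import math

def periodogram_power(times, values, freq):
    """Classical periodogram power at one trial frequency (cycles/day):
    large when the data vary coherently at that frequency."""
    n = len(values)
    mean = sum(values) / n
    c = sum((v - mean) * math.cos(2 * math.pi * freq * t)
            for t, v in zip(times, values))
    s = sum((v - mean) * math.sin(2 * math.pi * freq * t)
            for t, v in zip(times, values))
    return (c * c + s * s) / n

# Invented example: a sinusoidal, Cepheid-like signal at f0 = 0.4 cycles/day.
f0 = 0.4
times = [0.37 * i for i in range(60)]
mags = [15.0 + 0.3 * math.sin(2 * math.pi * f0 * t) for t in times]
```

Evaluated over a grid of trial frequencies, this power peaks sharply at the true frequency for a genuinely periodic star, while an aperiodic star (like S1 or S2) shows no dominant peak anywhere.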
Finally, S4, with a period of 13.9 days, was found to be too faint to be detected in the OGLE GVS, so the authors of this paper were not able to examine it in the I-band as they did with the others. Based on the light curves shown in the first paper, however, they deem that the shape is very different from what one would expect—Cepheids with periods longer than 10 days have near-infrared light curves that are more sinusoidal in shape than the sawtooth-like pattern shown in the original paper (the sawtooth pattern is characteristic of shorter-period Cepheids). They therefore conclude that S4 is likely not a Cepheid either.
So what can we learn from all this? Well, first and foremost, we have to be careful when searching for Cepheids in the infrared. And, just last week, another astrobite discussed the difficulty of disentangling stellar effects, like sunspots, from the presence of a planet. It’s probably worth thinking harder about what conclusions we draw when we have a massive amount of data and a few unusual objects in it that meet our criteria. They might not always be what we’re looking for.
Curiosity update, sols 1109-1165: Drilling at Big Sky and Greenhorn, onward to Bagnold Dunes by The Planetary Society
Since my last update, Curiosity drilled two new holes, at Big Sky and Greenhorn, and is now approaching Bagnold Dunes.
The OSIRIS-REx spacecraft continues to make steady progress toward launch in September 2016. Environmental testing is now underway to ensure the spacecraft is ready for the many conditions it will experience over its mission.
November 16, 2015
Life There's not much that can be said about Friday night's attack which hasn't already been said by far more articulate people than me and with much closer involvement, but since this is what it is, and even though in the past few years I've tended to ignore events such as this in order to concentrate on posting links to something you've probably already read or a video of three people on a stage discussing a film you've probably never seen, I want to record this.
Paris is the only city I've visited abroad and even though it was for just three days in 2002, just after I began writing this blog (so you can read something about the visit here), it's somewhere I feel like I know. I can still smell it, especially the cafe I stopped at on the Seine, and see it, mostly the view from the top of the Eiffel Tower. There have been many days since, and yet those three days I spent in Paris remain vivid.
There was a time when, if this kind of story broke, I'd immediately turn on a news channel and be transfixed. But in more recent years I've decided that unless I'm directly involved it's best to leave well alone until the facts make themselves known, especially when it's an ongoing event. Before live television, news would often take hours, even days, to reach us, and at a certain point I decided it's best to wait to hear what has happened rather than what possibly has.
On Friday night that led me to sticking with Children In Need while the horrors in Paris unfolded across Tweetdeck. The effect was, as you can imagine, surreal. On television, Nadine from Girls Aloud was being crammed into a telephone box with Bernie Clifton and Wayne Sleep whilst my timeline was filled with breaking news about explosions and death tolls and a general sense of shock amid some obvious autotweets often badly timed.
As a television event, Children in Need isn't what it once was. Like Comic Relief earlier in the year, it's clear that the budget has been shaved, so there's less variety in the special film material, a greater reliance on the live show expositing on donations made, and a larger number of appeal films. This was the first time I'd watched in years, at least until after midnight, having more recently usually bailed once the Doctor Who scene had been shown. Star Wars superseded it this year.
Some people moaned online about the BBC sticking with the charity event instead of dropping it in favour of coverage of Paris. Pre-Freeview this might have made sense, as not everyone had access to the BBC News Channel, and if it had been a night of pre-recorded programmes, perhaps. But what good would it have done to have stopped broadcasting from that studio, sending all the presenters, entertainers and audience members home and ending the appeal?
Instead they continued even as they must have known what was happening elsewhere. Some of the jollity in the Children in Need studio did feel forced. Every now and then there'd be a micro-expression from a presenter or entertainer which suggested they were wondering what they were still doing there, but they continued, knowing full well that if they stopped it would be giving in to the terrorists' demand to fracture normality, or what normality there was in watching Wayne Sleep dance.
Eventually I went to bed at 1am, waking the following morning to hear some of the aftermath on the Today programme. But I didn't really understand the timeline until seeing John Sweeney's superb report on tonight's rare live episode of Panorama which explained just how the atrocities were carried out and potentially by whom. As to the detail, I have few words. I don't understand why humanity as a species can be capable of so much love and yet all of this hatred too.
Something I noticed recently while wearing my (completely invisible but highly attractive) writing teacher chapeau is that the welter of SF subgenres and categories of fiction generally are terra incognita to a fair number of newer writers.
I’m okay with this. We begin as readers and viewers, after all. Many people coming into my UCLA courses are curious about speculative fiction. They aren't necessarily book-collecting, con-going, award-nominating fans. They've watched a fair chunk of genre TV and film offerings; they're up on the MCU, they can tell a spaceship from a unicorn and they even usually know which is the fantasy construct. They might have read a certain amount of fiction within their one or two favorite genres, or at least have read Harry Potter and his ilk to their kids.
Maybe this isn't especially nuanced, but it is a decent starting point. It’s only when we begin to write--and to consider selling what we write--that mincing the distinctions between, say, near-future SF and cyberpunk can become important. That's when you've moved beyond searching the bookstore for something you'll enjoy reading for pleasure. Finer categorization becomes useful when you’re aiming at a particular market, writing a review... or when you’re sitting in a workshop trying to articulate why the space unicorns just aren’t meshing well with the alternate history manor house homicide, with cyborgs, in a given piece.
So, a flashback: when my 2009 book, Indigo Springs, was in the pipe for publication, I took what was really my first run at writing publicity stuff, generating press releases and bits and pieces of blog stuff and other material whose primary thesis was: Hey, my book is so cool, buy my book, buy it buy it, OMG, candy giveaway, wheee!! Only, you know, subtle.
One of the things I never quite managed to come up with was a pithy label that captured its particular mash-up of urban fantasy and environmental science. This is a book whose main character finds a wellspring of magic that has become condensed--more powerful--and intensely toxic because of human impacts on the magical ecosystem. She then unleashes it into modern-day Oregon, creating a massive uninhabitable monster-infested forest that is both immensely contaminated and, as a result, weirdly enchanted.
I played with words like Eco-pocalypse a lot, but say it aloud and you'll hear how appalling and clunky that is. Apoca-green-alyse. Apocaenvirochockalocka.... argh! Why couldn't I just write a sexy vampire novel like that nice boy down the street?
Anyway. The book came out and one of the first reviews had this word, ecofantasy.
Oh! That's a thing? Thank goodness, I thought, and it’s catchy, too.
Except: If it is a thing, who else is doing it?
So I went looking. I didn't find a ton of stuff. Ecofantasy may be a thing, but it's not necessarily a huge one. My brain, which always serves up junk as its first ten answers, kept coming up with my favorite works of ecological science fiction, like Derryl Murphy’s “The History of Photography” or Bruce Sterling's Heavy Weather and realizing, “NO! Not Fantasy! Too sciencey!”
In time, though, I did find other works, like Walter Jon Williams's Metropolitan, where magic is a metered public utility. There's Harry Turtledove's Case of the Toxic Spell Dump, Mortal Love by Elizabeth Hand, and Nalo Hopkinson's Midnight Robber.
Even Patricia Briggs commits ecofantasy in one of her subplots--she’s got a storyline in one of her Mercy Thompson novels where werewolves get kidnapped for use as lab animals by a Big Pharma company.
Looking at these and thinking it over, I came up with a few concrete ideas to spell out what ecofantasy is:
Do you have favorite books or stories that fit into this frame? I am always looking for more examples.
The terror attacks in Paris, just like other terror attacks before them including 9/11, are a challenge to core democratic values of tolerance and freedom of speech. Turning against Muslims or against refugees is a terrible response as it only confirms the apocalyptic ideology of the attackers. Insisting on a primacy of individual privacy and government secrecy and continuing down a path of moral relativism is, however, equally doomed as a response.
Let me start with moral relativism. Now would be a good time for us to assert that all humans, independent of gender or sexual orientation, should have the same human rights, including self determination. That those rights take precedence over whatever anyone may claim their religion entitles them to impose on others.
That we all can and should say things openly that are critical of governments, corporations, scientists, and even of religions and ideologies. That free speech is more important than someone feeling offended by it and that physical violence or threats of physical violence are never a legitimate response to speech (nor that threats of or incitement to physical violence should be protected as free speech).
That all progress arises from a critical dialog between humans who, while acknowledging their emotions, are not controlled by rage or hatred or for that matter by fear.
We have to reassert our confidence in the potential for democracy and democratically elected governments, no matter how deeply flawed the current ones are. As Churchill pointed out, democracy is far preferable to all the alternatives. We will have plenty of time to fix democracy, but not if we abandon it now hoping that dictatorships will be more successful in fighting terrorism.
So far maybe not much of what I have said is controversial, so here it comes: We also have to let go of the mistaken idea that individual privacy *and* government secrecy are necessary for democracy. Terrorists can and will hide among us and attack us from within for a long time with the help of sympathizers and aided by the anonymity and segregation of much of modern society.
Just to be perfectly clear: I am also strongly against government secrecy. I am against secret detentions, secret courts, and secret surveillance. None of those are compatible with democracy. But I am staunchly for collective intelligence. Collective intelligence in this case against terrorism, but also more broadly against crime and most importantly as a basis for improving education and healthcare. I cannot see how society could avail itself of the benefits of collective intelligence in any form of government other than a transparent democracy. And conversely it makes no sense for democracy to deny itself those benefits.
Insisting on privacy because we fear our own governments will continue to pit citizens against secrecy-seeking governments in a spy versus spy society. Many will protest that we are already there. Maybe so, but why double down on a mistake? Snowden’s revelations have given us a unique opportunity to start over. I would pardon Snowden on those grounds alone.
Governments can and should tell their citizens what information they are collecting and how they are using that information. And companies should disclose which of these programs they participate in. Any and all such programs should have oversight by elected politicians and transparent reporting on their scope and effectiveness.
As for the potential for collective intelligence to help, we see it all around us on the Internet. From the uncannily accurate do you know so-and-so suggestions on Facebook and LinkedIn to the related products on Amazon. I can also observe the effectiveness of collective intelligence from behind the scenes in many of our investments and in particular with Sift Science which does fraud detection. Combining a lot of data really does work.
Democracy, human rights and progress through critical dialog and collective intelligence. We need all of those more than ever.
November 15, 2015
After this week’s horrific terror attack on Paris, we commissioned artist Stacy Bias to depict the intentions of ISIS in targeting ordinary French civilians. The far right both in France and abroad is already mobilizing to use this tragedy for an Islamophobic agenda, with French National Front leader Marine Le Pen using deliberately warlike language, while across the Atlantic, senior GOP politicians are ranting about Muslims and trying to twist it into an argument for more guns.
A silver lining amid the horror and the cynicism is that people across the world are answering back with confidence and dignity. Fascists in Lille demonstrating today against Islam were chased out of town, and social media is alight with the thoughtful responses of a population who is already prepared to fight the forces of reaction and hatred that ISIS is trying to activate.
TV As a multi-format franchise platform, Doctor Who has found its narrative communicated not just through television, but through books, audiobooks, audio plays, theatre plays, short fictions, comics, graphic novels, computer games, sweet cigarette cards and even interactive exhibitions. Much of the time these are generally conventional applications of the usual ingredients of a Who story (as explained by Paul Magrs here) rendered in text or sound or pictures, with the general ambition of suggesting a story which might otherwise appear on television if only the programme makers had an infinite budget and schedulers the wherewithal to dedicate many hours to the same scenario.
But every now and then a writer decides to be a bit experimental and offer the kind of adventure which could only be told in a given format by utilising its unique properties. Amongst the BBC Eighth Doctor novels we find stories written as memoir (The Turing Test), gothic romance (The Banquo Legacy) or popular history book (The Adventuress of Henrietta Street). At Big Finish there are pieces like Flip Flop, in which two pairs of episodes can be listened to in any order, and there have been several musicals (Doctor Who and the Pirates). Then there's James Goss, whose writing for AudioGo turned a presumed audiobook licensing limitation into a strength, with the award-winning Dead Air in which the Tenth Doctor himself tells a tale in the first person, eventually tipping over into something akin to drama, but not quite.
The television series itself has remained relatively unadventurous, especially in its first television run, sticking for the most part with classical narrative storytelling. Apart from Bill and Tom breaking the fourth wall now and then, Spearhead from Space is the notable departure, but apart from being needfully shot on film due to strike action, in such a way as to resemble one of those ITC adventure series the Pertwee years otherwise aspire to be, its script might have worked perfectly well shot on video in a multi-camera set up. Rarely did the series do anything as radical as becoming animation or mimicking a Rupert the Bear strip, as happened in Doctor Who Magazine. Anthony Root did not attempt to mount a story in the style of a Horizon documentary, for example.
Now we have Mark Gatiss’s Sleep No More, which with its found footage format and third person shooter aesthetics is about as radical as Who has been since its resuscitation. The first television episode ever without a traditional title sequence - replaced instead with the digital word search above (leading to the continuity announcer actually telling us the title of the episode and its author beforehand) - it also shares with Love and Monsters, perhaps the show's most radical episode until this point, an unreliable narrator. But unlike Elton, Reece Shearsmith’s creepy Rasmussen is a rare example of an antagonist as our point of view figure, cynically dragging us through the expected functions of a Doctor Who story for his own nefarious ends.
The “found footage” genre has become a bit of a spent force in cinema. Box office is dwindling on the Paranormal Activity series and there’s a sense that viewers have become too accustomed to its tricks: of having to mentally justify just why its characters persist in filming even when their lives are in mortal danger, why there’ll always be shots which would have been impossible to capture, and how inopportune jump cuts provide an opportunity to either move the story along or bewilder the audience as to the action in between. Plus there’s always the moment when the writer has to explain exactly how all of this b-roll ended up being edited into something approaching a story, especially if it’s being sourced from multiple cameras.
As if to head off some of this criticism, Gatiss ingeniously makes the collection and editing of the footage not just central to the mystery but also to the horror. How early are we meant to notice Clara’s participation in the surveillance? Even if eagle-eyed viewers might have noticed that Morpheus has infected her early on, they might have been thrown by the design of the soldiers’ helmets, which subtly suggest the idea of a camera without confirming it. Also the episode is careful to cut away from the point of view of the characters when they address each other, until it’s absolutely necessary, to save the thing from resembling an episode of Peep Show, and no take lasts longer than a few seconds so we’re not thrust into a Strange Days-like nightmare.
Except, and you can tell how important this use of “except” is since I’m deploying one of my prop words when I’m trying desperately not to, I wasn’t scared. Not at all. Ever. I was unsettled, I’ll give them, unsettled. By throwing out the title sequence, Gatiss disorientates the fan trained to expect the usual scream and crash into the music, so that he can deploy numerous moments where the Ron Grainer theme would surely have intruded, and confuse us as to how long we’ve been watching the episode, assuming we’re watching in the dark and can’t see a nearby clock. But this arguably also has a distracting, Brechtian alienation effect, as we’re constantly wondering why the title sequence hasn’t happened yet.
Analysing why an individual isn’t scared by some horror is probably as foolish as attempting to understand why some people don’t find a joke funny. Old school horror fans tend to find the jump scares of the films produced by Jason Blum or James Wan pretty tedious, but the box office on the likes of Sinister and The Conjuring has been enough for them to spawn sequels and franchises. Most of the one-word-adjective direct-to-stream horrors listed on Netflix all look the same to me (even after I’ve watched them) but there’s clearly enough of a market for them to remain in production. A glance through social media suggests that Sleep No More did work for a few people and that’s good.
Let’s be foolish, nonetheless. Perhaps part of the problem is that for all the extensive world building, the actual supporting characters are hopelessly generic and lacking in dimension, which makes them difficult to relate to for the purposes of peril. Unlike the aquatic-based two-parter earlier in the season, we learn little or nothing about the soldiers visiting the station, apart from some incongruous religious conviction, beyond what morsels flashed past our eyes at the start of the episode, but unlike The Waters of Mars, which utilises a similar tactic, few of them make enough of a mark for us to care about their demise. While I’m not expecting something of the magnitude of the chops in gravy scene from Planet of the Dead, there’s plenty to be said for inserting some character as well as plot.
The monsters are disappointing too. For all its improvements in the writing of its main character and general intrigue, this season’s new alien interlopers have either been memorable for the wrong reasons or simply lacking in form, which is a pretty literal description of the sandmen. Found footage films tend to have pretty indistinct adversaries, so as to hint at the horror which may leap out at the viewer from the shadows in the corner of the frame, because as Kirk Douglas’s film producer character in The Bad and the Beautiful says, “the dark has a life of its own. In the dark, all sorts of things come alive.” The directors of The Blair Witch Project took this a step further by scaring the bejesus out of their own actors during production and having them video the results.
But they rarely stay there and the reveal is usually horrendously disappointing. Cloverfield becomes considerably less interesting when the beasty is presented in full and the same occurs here. Although they’re centrally linked to the premise of the story in a way which is familiar to listeners to the less experimental AudioGo exclusives, exactly how the sandmen can kill you is never properly shown, and when they do appear it’s for just long enough to look like nothing so much as a CGI element. From the minute the hand is turned to dust as a door closes, the section of my brain which deals with suspension of disbelief and the uncanny valley went on strike and decided to take a hiking holiday, which meant that at no point did I feel that any of these characters were under threat.
The idea of a corporation designing a product for mass surveillance, one which not only turns insomnia from a bug into a feature but also provides the ability to record everyone’s movements without their being aware of it, is especially apt at this moment when everything we do online is being recorded on huge databases apparently accessible by the security services. With other things on its mind the episode doesn’t really explore this idea in detail, and perhaps that is another criticism: Gatiss packs in enough ideas for a whole season of Doctor Who while giving them barely more than lip service. Imagine if Big Finish or a novelist had run with the idea of a planet whose entire population doesn’t sleep. Perhaps their nightmares come to life instead.
Nevertheless, the episode isn’t a complete failure if you remove the horror baselines. A piece of genre television this experimental simply can't be. One of my favourite moments of this series is when we see the usual expositional post-TARDIS landing business between the Doctor and Clara from the viewpoint of the secondary characters as they approach them rather than the usual reverse and we can appreciate just how eccentric they can seem. I can't really hate any episode with a Doctor Who quote as a title which then takes a few moments out in the middle to explain where it's from. Also, was it a conscious decision to pay tribute to Back To The Future so close to its anniversary?
If the all too spoilery publicity synopses for the final three episodes are a guide, the show’s experimentation doesn’t end here. Given how none of this season’s episodes have been standalones, will Clara’s condition continue into Face The Raven and be part of the reason she ends her time on the TARDIS? Or will it, as the Edgar Allan Poe piece suggests, be linked to her addiction to being in the Doctor’s company, and what of his slightly odd suggestion in The Zygon Inversion that he’s missed her for a month rather than five minutes? My theory is that we’ve been watching this season in the wrong order for reasons to do with mortality and time travel. Is it possible that the impossible girl won’t have a happy ending?
November 14, 2015
TomC did the work of porting the LinuxCNC-based controller of the triangle machine tool from an ancient heavy desktop with a parallel port to a Beaglebone Black running Machinekit. The good news is it’s all back to working again, and I can access the UI through X11 over the network connection provided by the USB port, so there’s some latency, but who cares.
We heard that the BB used a special unit to generate the realtime pulses, rather than relying on a somewhat bogus “realtime” linux build. We began our investigation of the code and documentation.
The Programmable Realtime Units (there are two) attached to the ARM processor are small processors that share a few kilobytes of memory between themselves and with the main processor. They run at 200MHz in a very predictable manner, with each instruction taking one or two cycles. This provides a potential resolution of 5 nanoseconds, an order of magnitude faster than the 16MHz Arduino I was using for my anemometer experiments.
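As a quick back-of-envelope check on that comparison (assuming single-cycle instructions on both chips, which flatters the PRU slightly since some of its instructions take two cycles):

```python
# Cycle times for the two processors
pru_cycle_ns = 1e9 / 200e6   # PRU at 200MHz -> 5.0 ns per cycle
avr_cycle_ns = 1e9 / 16e6    # Arduino AVR at 16MHz -> 62.5 ns per cycle
print(pru_cycle_ns, avr_cycle_ns, avr_cycle_ns / pru_cycle_ns)  # 5.0 62.5 12.5
```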
(My intuition is that this tech is very similar to GPUs, which have thousands of special purpose processors with their own tads of memory, shared memory, unique characteristics, and protocol for communicating with the main CPU.)
The Beaglebone has shedloads of pins of all kinds and has the complexity of Manhattan Island compared to the Arduino’s more understandable farmyard size. In terms of learning how to use these things, less is most definitely more — you’ll get far more done in a month with an Arduino than with a Beaglebone if you are a Dummy.
from mmap import mmap
import time, struct

# page numbers from the 4973 page AM335x Sitara reference manual
# codes given in p182 table 2-3
GPIO1_offset = 0x4804c000
GPIO1_size = 0x0fff

BIT28 = 128  # for pin P9_28

# values from p4877 section 25.4.1
GPIO_OUTPUTENABLE = 0x134
GPIO_SETDATAOUT = 0x194
GPIO_CLEARDATAOUT = 0x190

# memory map the IO address space to a Python object
f = open("/dev/mem", "r+b")
mem = mmap(f.fileno(), GPIO1_size, offset=GPIO1_offset)

# set flag for pin P9_28 to output
reg = struct.unpack("<L", mem[GPIO_OUTPUTENABLE:GPIO_OUTPUTENABLE+4])[0]
mem[GPIO_OUTPUTENABLE:GPIO_OUTPUTENABLE+4] = struct.pack("<L", reg & ~BIT28)

# set and clear the pin for bit 28 every 0.2 seconds
while True:
    mem[GPIO_SETDATAOUT:GPIO_SETDATAOUT+4] = struct.pack("<L", BIT28)
    time.sleep(0.2)
    mem[GPIO_CLEARDATAOUT:GPIO_CLEARDATAOUT+4] = struct.pack("<L", BIT28)
    time.sleep(0.2)
The critical table from page 4877 is where those magic numbers are obtained:
This is how we solve the problem caused by the bitpacking of all the pin values into one 32-bit word: writing a 1 to a bit in GPIO_SETDATAOUT or GPIO_CLEARDATAOUT sets or clears the corresponding output bit while leaving the others untouched. Otherwise, to set the bit we’d have to write:
GPIO_DATAOUT = 0x13C
reg = struct.unpack("<L", mem[GPIO_DATAOUT:GPIO_DATAOUT+4])[0]
mem[GPIO_DATAOUT:GPIO_DATAOUT+4] = struct.pack("<L", reg | BIT28)
and risk a masking over-write on any of the other 31 bits we didn’t want to be altering, if an independent process changed them during the gap between the read and the write-back.
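A toy model of that race (pure Python, nothing Beaglebone-specific) makes the difference concrete: the hardware set register needs no read at all, while the read-modify-write version has a window between its read and its write-back.

```python
# Toy model of a 32-bit output register; not real hardware access.
reg = 0  # stands in for GPIO_DATAOUT

def set_bit_via_setdataout(bit):
    # the SETDATAOUT register ORs in only the bits we write,
    # so no read is needed and there is no race window
    global reg
    reg |= bit

def set_bit_via_read_modify_write(bit):
    global reg
    snapshot = reg           # read
    # ...another process changing reg here would have its change
    # silently overwritten by the line below...
    reg = snapshot | bit     # write back all 32 bits

set_bit_via_setdataout(1 << 28)
set_bit_via_read_modify_write(1 << 3)
print(hex(reg))  # 0x10000008
```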
Moving on to the Machinekit code
The critical file is stepgen.c, which in some sophisticated way controls the PRU and its code in pru_generic.p for the purpose of generating precisely timed pulses for stepper motors or servo motor drivers.
These amazing programs require further study under freedom 1 of the free software definition. Nevertheless, there are some interesting comments at the head of the file:
PRU GPIO Write Timing Details
The actual write instruction to a GPIO pin using SBBO takes two PRU cycles (10 nS). However, the GPIO logic can only update every 40 nS (8 PRU cycles). This means back-to-back writes to GPIO pins will eventually stall the PRU, or you can execute 6 PRU instructions for ‘free’ when burst writing to the GPIO.
Latency from the PRU write to the actual I/O pin changing state (normalized to PRU direct output pins = zero latency) when the PRU is writing to GPIO1 and L4_PERPort1 is idle measures 95 nS or 105 nS (apparently depending on clock synchronization)
PRU GPIO Posted Writes
When L4_PERPort1 is idle, it is possible to burst-write multiple values to the GPIO pins without stalling the PRU, as the writes are posted. With an unrolled loop (SBBO to GPIO followed by a single SET/CLR to R30), the first 20 write cycles (both instructions) took 15 nS each, at which point the PRU began to stall and the write cycle settled in to the 40 nS maximum update frequency.
PRU GPIO Read Timing Details
Reading from a GPIO pin when L4_PERPort1 is idle requires 165 nS as measured using direct PRU I/O updates bracketing a LBBO instruction. Since there is no speculative execution on the PRU, it is not possible to execute any instructions during this time; the PRU just stalls.
That final paragraph amazingly suggests a worse response time than the 16MHz AVR, whose SBIS instruction can read and respond to a digital input within a single 62.5 nS processor cycle — unless the PRU can redeem itself through an interrupt feature, which of course buggers up any special timing loops I might set up. Maybe that’s what we need the second PRU for.
Anyways, not being a Machinekit master, I tried some direct control of the PRU from Python using the amazing PyPRUSS library.
First things first, assuming the PRU assembly code is in a file called prucode.p, the Python test harness code is as follows:
# compile the file into prucode.bin
import subprocess, os
p = subprocess.Popen("/usr/bin/pasm -b prucode.p", shell=True)
pid, sts = os.waitpid(p.pid, 0)

# do this on the command line at start up if the device needs to be enabled
# echo BB-BONE-PRU-01 > /sys/devices/bone_capemgr.9/slots

# run the complete cycle
import pypruss
pypruss.modprobe()            # This only has to be called once per boot
pypruss.init()                # Init the PRU
pypruss.open(0)               # Open PRU event 0 which is PRU0_ARM_INTERRUPT
pypruss.pruintc_init()        # Init the interrupt controller
pypruss.exec_program(0, "./prucode.bin")  # Load firmware "prucode.bin" on PRU 0
pypruss.wait_for_event(0)     # Wait for event 0 which is connected to PRU0_ARM_INTERRUPT
pypruss.clear_event(0)        # Clear the event
pypruss.pru_disable(0)        # Disable PRU 0, this is already done by the firmware
pypruss.exit()
The mainloop of the PRU code looks like this:
    MOV r1, 0xF00000
    MOV r2, 1<<28
BLINK:
    MOV r3, GPIO1 | GPIO_SETDATAOUT
    MOV r0, 8               // loop 8 times
    SBBO r2, r3, 0, 4       // go HIGH!!!!
DELAY1:
    SUB r0, r0, 1
    QBNE DELAY1, r0, 0
    //ADD r0, r0, 1         // commented slowdownop
    MOV r3, GPIO1 | GPIO_CLEARDATAOUT
    SBBO r2, r3, 0, 4       // go LOW!!!!
    MOV r0, 4               // loop 4 times
DELAY2:
    SUB r0, r0, 1
    QBNE DELAY2, r0, 0
    SUB r1, r1, 1
    QBNE BLINK, r1, 0
The output on the scope is as follows:
So, that’s 100nanoseconds for the HIGH and 80nanoseconds on the LOW.
A high loop delay of 7 instead of 8 results in 90nS HIGH and 80nS LOW because the DELAY1 loop is two instructions long, or 10nS. DELAY1 of 9 gives 110 nS HIGH, and so on, so it’s all good, and you can extrapolate down to a theoretical delay of zero, leaving 20 nS for the subsequent MOV r3 and SBBO r2 after the loop before it goes LOW.
On the LOW side there are 40 nS that need to be accounted for outside the DELAY2 loop. In order of execution they are: MOV r0,4; SUB r1; QBNE BLINK; MOV r3; MOV r0,8; SBBO r2; which is 6 instructions that ought to add up to 30 nS, so two of them must be taking 2 cycles each to make up the 10 nS difference.
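Those two paragraphs of bookkeeping can be written out as a few lines of arithmetic (5 nS per cycle, with the two-cycle instructions identified in the comments):

```python
CYCLE_NS = 5  # one PRU cycle at 200MHz

def delay_loop_ns(count):
    # each pass of DELAY1/DELAY2 is SUB + QBNE, two single-cycle ops
    return count * 2 * CYCLE_NS

# HIGH side: loop of 8, then MOV r3 (2 cycles, 32-bit constant)
# and SBBO (2 cycles) before the pin goes LOW
high_ns = delay_loop_ns(8) + (2 + 2) * CYCLE_NS

# LOW side: loop of 4, plus MOV r0,4 (1) + SUB r1 (1) + QBNE BLINK (1)
# + MOV r3 (2) + MOV r0,8 (1) + SBBO (2) = 8 cycles
low_ns = delay_loop_ns(4) + 8 * CYCLE_NS

print(high_ns, low_ns)  # 100 80
```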
Luckily there’s a really useful set of training slides from Texas Instruments in 2009 where they specifically explain what’s going on as if to a human. Fancy that! Why the heck don’t they insert these prepared summaries for the purpose of teaching humans as an appendix to the official manuals?
Nearly all instructions (with the exception of memory access) are single cycle execution.
That accounts for the Store Byte Burst (SBBO) instructions taking two cycles each. The remainder of the time is due to some of the MOV instructions requiring two cycles, and others completing in one.
Turns out that MOV r3, X is a pseudo instruction composed of:
LDI r3.w0, (X & 0xFFFF)
LDI r3.w2, (X >> 16)
This is obviously necessary because as each instruction is 32 bits long, you can’t fill it all with data, and the most you can load at a time is 16 bits into one or other of the words.
However, if you do LDI r3, X instead of LDI r3.w0, X it packs the top 16 bits with zeros, which is handy if X happens to be less than 65536, as the assembler PASM recognizes in the case of MOV r0,4 and MOV r0,8.
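A tiny model of that rule (my guess at PASM’s behaviour, not its actual source):

```python
# How many cycles a "MOV rX, imm" costs, if the assembler emits one
# LDI for constants that fit in 16 bits and two LDIs otherwise
def mov_cycles(imm):
    return 1 if imm < (1 << 16) else 2

GPIO1 = 0x4804c000
GPIO_SETDATAOUT = 0x194

assert mov_cycles(4) == 1                        # MOV r0, 4
assert mov_cycles(8) == 1                        # MOV r0, 8
assert mov_cycles(GPIO1 | GPIO_SETDATAOUT) == 2  # MOV r3, full 32-bit address
```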
So, it’s all easy and adds up like that…
Not so fast!
What happens when I uncomment the single cycle instruction at slowdownop?
So about 50% of the loops are registering 100nS delay and the rest are giving 110nS delay instead of the 105nS delay I was hoping for.
When I zoom out the jitter is not cumulative.
It’s as if there’s an independent process that is carrying the GPIO_SETDATAOUT and GPIO_CLEARDATAOUT values to the physical GPIO seen by the oscilloscope that really only works on a 10nS cycle.
This isn’t so bad, as it generally requires two-instruction countdown loops to control the delays, as in the example above — although you can get to single-cycle resolution with a once-off branch across an optional single-cycle instruction that runs in series with the delay loop.
There’s probably no way to discover the phase of this GPIO update process against the PRU cycle, which is a pity.
Many thanks to hipstercircuits for parts of all these examples. In fact his example of an accelerating profile implemented by a table of precalculated values accessed by the PRU leads me to imagine a system where we feed a circular buffer of delay times, wire the signals into our 42Volt servo motors via an H-bridge and get them to play music or speak words.
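A sketch of how the ARM side of that imagined system might precalculate and queue the delays (all names here are invented; the PRU consumer is only indicated in a comment):

```python
from collections import deque

def acceleration_profile(steps, start_ns, end_ns):
    # linear ramp of inter-step delays, precalculated on the ARM side
    # in the manner of hipstercircuits' example
    return [int(start_ns + (end_ns - start_ns) * i / (steps - 1))
            for i in range(steps)]

# circular buffer shared with the PRU (a deque standing in for a
# region of the shared memory)
step_delays = deque(acceleration_profile(8, 1000, 200), maxlen=64)

# the PRU side would pop one delay per step pulse:
# SBBO step pin HIGH, spin (delay / 10) loop passes, step pin LOW
while step_delays:
    delay_ns = step_delays.popleft()
```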
But while I have this test harness going, it’s worth corroborating the awful read performance mentioned by the authors of pru_generic.p above by inserting the following lines after the DELAY1 loop:
#define GPIO_DATAIN 0x138
    MOV r3, GPIO1 | GPIO_DATAIN
    LBBO r2, r3, 0, 4
    MOV r2, 128
The result is:
That's 300 nS (note the change in horizontal scale), or 170 nS in excess of what it ought to have been, which matches the observation. (I have no idea what he means by L4_PERPort1 being idle.)
This is a problem because factoring this kind of delay into the code is not going to happen. It feels like there's a bodge going on: the PRU has its clock brutally put on hold when it accesses certain segments of memory while the system calls out to a non-integrated unit to get the data before releasing it -- when the PRU could at the very least have been allowed to run asynchronously for 30 instruction cycles until the data was ready.
Indeed, it seems like there should be no reason for directly accessing the inputs at this level, because just like the brain, there are numerous special units for preprocessing the signals within the Enhanced Capture Module.
In particular, there's the Enhanced Quadrature Encoder Pulse module for handling all the signals returning from the servo motor. Here's one of the diagrams from the manual:
It's almost as if they've built a whole servo motor drive apart from the H-bridge into this one chip, where this unit takes the feedback and the PRU generates the complicated PWM drive cycle.
Even better, it seems there are three of these independent units on board, so theoretically all three servo motor drives we currently buy in at a retail price of $60 each could be implemented with this one chip plus three H-bridges and a little bit of smart PRU code.
The advantage of getting all the servo motor drives into one unit isn't so much about the cost savings as the fact that they can potentially respond to one another.
So instead of each motor driver struggling independently to hold position, when one falls out of tolerance due to the speed or forces encountered it can tell the other two axes to slow down and give it time to catch up, so that the head of the machine remains exactly on course. Under the current configuration with completely independent motor drivers this is not an option, so everything needs to be run at a lower overall speed to avoid overloading things and to maintain tolerance.
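As a hedged sketch of that coordination idea (nothing here is Machinekit code; the names are invented): each control cycle, scale every axis's commanded feed by the worst following error, so the axes slow down together and stay on the programmed path.

```python
def coordinated_feed_scale(following_errors, tolerance):
    """Return a feed override in 0..1 based on the worst axis error."""
    worst = max(abs(e) for e in following_errors)
    if worst <= tolerance:
        return 1.0  # every axis within tolerance, full speed
    # slow all axes in proportion so the struggling one can catch up
    return tolerance / worst

# one axis at twice its tolerance -> halve the feed on all three
print(coordinated_feed_scale([0.010, 0.020, 0.050], 0.025))  # 0.5
```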
On the other hand, independent motor drivers are simple to interface and can be sold as a commodity.
Rolling it all into one unit would probably result in far higher complexity (eg a 5000 page manual) and many fewer sales. As is clear, the hardware complexity is now supplied by a sub-$35 Beaglebone, so the only thing lacking is the software. This can only be done open source, owing to the intense customization that source code access enables, and to the lack of investment available for the years of risky unprofitability it would take to develop when established solutions already exist and all the work would be lost and wasted if the venture failed to produce a marketable product. Free software is the only realistic way that the potential of this complex tech gets unlocked and turned into productivity by incremental aggregation. It would help if this vital work wasn't consigned to the margins of the economy while all the money, and therefore employment, kept flowing to the likes of Autodesk to be squandered on financial games in which the organization of the effective service of human needs by technology is not the measure of success.
As Henry Ford wrote 100 years ago:
I do not know whether bad business is the result of bad financial methods or whether the wrong motive in business created bad financial methods, but I do know that, while it would be wholly undesirable to try to overturn the present financial system, it is wholly desirable to reshape business on the basis of service. Then a better financial system will have to come. The present system will drop out because it will have no reason for being. The process will have to be a gradual one.
Title: Fast Radio Bursts and Radio Transients from Black Hole Batteries
Authors: Chiara M. F. Mingarelli, Janna Levin & T. Joseph W. Lazio
First Author’s Institution: California Institute of Technology & Max Planck Institute for Radio Astronomy
Status: Accepted to ApJ Letters
About fifty years ago, a young Jocelyn Bell was reviewing her daily dose of radio data — 30 meters of data printed on chart paper which had to be tediously scanned. She found something very odd: a pulse in the data that repeated every few seconds. This astronomical phenomenon quickly became the hottest topic of the late 60’s: Were these signals terrestrial? Little Green Men? We later learned that these radio pulses were the result of pulsars — neutron stars whose magnetic field beams in our direction like a lighthouse as the neutron star rotates.
Flash forward to today, and history seems to be repeating itself. Astronomers have once again discovered something very odd: extremely bright, short, nonperiodic radio pulses from the sky. These have been dubbed “fast radio bursts.” As of this article, we’ve detected eleven bursts, but we’re still not sure what is behind these strange signals. Theories have again ranged from terrestrial noise to signaling aliens. Today’s authors suggest a familiar end to the mystery: pulsars.
A Battery Powered Neutron Star
Unlike ordinary pulsars, which can be detected without a stellar partner, pulsars which cause fast radio bursts will need friends. Specifically, they need to be in a binary with a black hole. Neutron stars paired with black holes do not behave nicely like Tatooine’s twin suns. Instead, these massive objects continuously spiral towards each other while they radiate energy as gravitational waves. Eventually the pair nearly touches and the neutron star is sucked up into the black hole.
Seconds before the collision, the neutron star might actually become battery powered! Here’s how: The black hole and neutron star are rapidly revolving around one another, which means that the black hole passes quickly through the neutron star’s magnetic field. As the black hole moves through this field, it induces an electromotive force (emf) between its own surface and the surface of the neutron star. The magnetic field lines of the neutron star act like wires, carrying charged particles from the black hole “battery” and vice versa. The authors think that the movement of these charged particles releases radio waves.
The battery power increases as the neutron star and black hole get closer together, but the radio waves will only be visible to us for a few milliseconds. The authors point out that this time scale is so short that the time it takes our cameras to click! will affect how much of this battery light we can capture. Figure 2 highlights this effect.
So could this theory explain all of the fast radio bursts? Probably not. The authors admit that the black hole battery scenario should occur 1000 times less often than fast radio bursts. However, they might explain fast radio bursts with a distinct feature: double pulses. After the battery runs dry and the black hole absorbs the neutron star there will be a second pulse of radio waves. This theoretical marvel is known as a blitzar.
Even if this theory doesn’t explain all (or any) fast radio bursts, this is an astrophysical phenomenon which might very well exist. Future theoretical studies can point to other signatures of such a system, and maybe one day we can detect the very first black hole battery.
Award-winning astrophotographer Adam Block shares stunning images of a few rarely-imaged pieces of our universe.
November 13, 2015
Alyx and I'll be posting the occasional note here over the next few weeks, because Charlie was kind enough to hand me the mic. I thought I'd start with a long, musing whimsical thing about mincing subgenres and the nature of ecofantasy, because my upcoming book A Daughter of No Nation lies within that particular subgenre--when it's not passing for portal fantasy or a pirate story or crime fiction with magic.
Sadly, the opening of that essay is wayyyy too stuffy, at present, and needs to be beaten with a sack of oranges. Don't worry, I'll fix it before you see it. Anyway, I should introduce myself first, right?
So--official details: I'm in Toronto, I have gobs of stories out along with the four ecofantasy novels, the first two of which, Indigo Springs and Blue Magic, are chock fulla magically mutated animals, magical objects and queer folk. Seriously. I mention this last because a) I have the exceptional good fortune to be incredibly gay married to author Kelly Robson; b) my most recent book, Child of a Hidden Sea, was to my utter delight and astonishment nominated for a Lambda Award this year. The above-mentioned A Daughter of No Nation is its sequel. There will be a third; its current title is The Nature of a Pirate.
Unofficially, here are five random medium-known facts about me:
- The last four albums I bought were by Charlie Brand, Lord Huron, Corb Lund, and The Kills. The one before that was the Across the Universe soundtrack.
- In person, I have a severe case of potty mouth and tend to use the Effbomb, as it's charmingly euphemized by the parents of preschoolers, in place of a comma.
- I will alwaysAlwaysALWAYS click on the kitten video. Even if I was the one who uploaded it.
- I will never click on the current news story, unless it is about climate change or other green stuff. I am not following the U.S. election. That war? No clue. Worrying too much about the state of the world, you see, makes it impossible for me to write. (I did try following the recent Canadian election and that was bearable, on a par with eating cold polenta because it let you get through a particularly trying day without having to cook, but I don't think it's an experiment I'll repeat anytime soon.)
- Perhaps as evidenced by the polenta comment, I have occasionally been accused of committing surrealism.
- I am, nevertheless, a kick-ass story doctor and teacher.
- I am easily distracted. If you hate the idea of an ecofantasy essay, wave something shiny under my nose, preferably in the form of a question.
- It's possible that counting to five may not be my strong suit.
Put another way, I'm happy to be here and look forward to talking to you all!
Earlier this year, when I was working with Jamie, Tom, Anna, Paul, Stephen and Adam on a vision for Government as a Platform, I got stuck on the Central Line on the way back from work and ended up trying to distill all the things the team were talking about. The list below was the result.
Split data from services. Hold it in organisations with appropriate accountability (central government, local government, professional bodies) and make the quality of the data independently verifiable.
Services can be provided by any layer of government, and by commercial or third sector orgs. It's OK when they overlap, complement and duplicate.
It is possible to interact with multiple layers of government at once while respecting their organisational and democratic sovereignty.
Build small services that can be loosely joined together however citizens like. Do not try and model the whole world in a single user experience; you will either fail or build a digital Vogon.
Put users in control of their data. Millions of engaged curators are the best protection government has against fraud, and that citizens have against misuse.
A user not having to understand government does not mean obfuscating the workings of the system.
The system should actively educate people about how their democracy works and where power and accountability lie. Put transparency at the point of use.
Be as vigilant against creating concentrations of power as you are in creating efficiency or avoiding bad user experiences.
Understand that collecting data to personalise or means-test a service comes at a cost to a user's time and privacy.
Sometimes the user need is 'because democracy'.
To repeat the intro, definitely not all my own work, but they are my words, so where this is wrong it is my fault.
Next Tuesday (November 17th), my partners Brad and John and I will be in Berlin for an evening event to talk about entrepreneurship, the Internet, and whatever else is of interest to the audience. Somewhat to our surprise the event sold out within an hour and there is a waitlist. I believe there is a little more capacity so hopefully we can fit everyone. To make sure that we cover topics that people want to hear about: if you are planning to attend please add questions (or comments for that matter) to this Hackpad.
The undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.
Did you do an REU this summer? Or maybe you’re just getting started on an astro research project this semester? If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!
You can share what you’re doing by clicking here and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.
We look forward to hearing from you!
University of California, Santa Cruz
Megan conducted this research at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University with the guidance of Dr. Philipp Mertsch. Megan was a SULI (Science Undergraduate Laboratory Internships) intern during the summer of 2015, and she is currently a rising junior.
Modeling High-Energy Gamma-Rays from the Fermi Bubbles
In 2010, the Fermi Bubbles were discovered at the galactic center of the Milky Way. These giant gamma-ray structures, extending 55 degrees in galactic latitude and 20-30 degrees in galactic longitude, were not predicted. To develop a model for the gamma-ray emission of the Fermi Bubbles, we assume that second order Fermi acceleration is responsible for the high-energy emission of the bubbles. By solving the steady-state case of the transport equation, I compute the proton spectrum due to second order Fermi acceleration. I compare the analytical solutions of the proton spectrum to a numerical solution. I find that the numerical solution to the transport equation converges to the analytical solution in all cases. The gamma-ray spectrum due to proton-proton interaction is compared to Fermi Bubble data (from Ackermann et al. 2014), and I find that second order Fermi acceleration is a good fit for the gamma-ray spectrum of the Fermi Bubbles at low energies with an injection source term of S = 1.5*10^(-10) GeV^(-1) cm^(-3) yr^(-1). I find that a non-steady-state solution to the gamma-ray spectrum with an injection source term of S = 2*10^(-10) GeV^(-1) cm^(-3) yr^(-1) matches the bubble data at high energies.
This is the first major meeting since Dawn's arrival at Ceres, and despite competition with Pluto surface science there was a well-attended Ceres talk session on Monday and poster session on Tuesday.
November 12, 2015
TV Bit of an inside announcement only of interest to a minuscule number of you, but I awoke the other morning to the news that Amazon Video now has an app on Roku boxes in the UK.
This will be of little interest to US users of this useful little black box who've had access for years, but for some reason, in the UK, Amazon refused to budge, forcing us, or me, to use the astonishingly ropey app on the Sony blu-ray players which began life as the Lovefilm streamer, and a hopelessly slow version on a television which took at least five minutes to reach a stream which was less than HD.
Eventually I bought an Amazon Fire stick but that's never been ideal due to frame skip and immensely obtuse interface which mixes films with music and games and other bits and bobs. Plus its on-board wifi receiver isn't as strong as it could be.
The Roku app uses the basic on board structure utilised by most of the streaming apps and ironically resembles the earliest version of Netflix.
But it's entirely functional and the picture quality is great.
Watching the charming Daniel Radcliffe/Zoe Kazan meet-cute, The F Word or What If? (depending on your territory) tonight, on a couple of occasions I forgot it was a stream and glanced at my blu-ray player wondering why it wasn't switched on.
If only either Amazon Video or Netflix had a decent back catalogue I might consider cancelling the postal dvds. But they don't, yet, so I won't be.
It occurs to me that not everyone who reads my scrawls uses Twitter where I've been plugging this twice a day, so I thought I'd best post it again just in case. Current total runs to one contribution plus a couple of promises, so there's still all to play for.
Film For this year's annual review on this blog, let's collect our experiences of seeing films.
As you know, each week I've been posting about my favourite film of a given year, though only now and then do I actually offer anything close to a review.
But instead of simply posting reviews, usually I've talked about the experience of seeing the film for the first time, or what the film means to me in ways which sometimes have nothing to do with the film.
I thought I'd extend an invitation for others, for you, to do the same and publish the results during December.
What I'd like, if you have the time, is for you to choose a film, a favourite though it doesn't need to be and write about how you saw it, what happened and what it meant to you.
Could have been at the cinema, on television, video, dvd, blu-ray, streaming, whatever.
IMPORTANT: This isn't about just reviewing films, although obviously you might end up reviewing the film depending on what you're writing about.
You could write about the people you saw it with, where you saw it, an incident which happened during the screening.
Or it could be that the film sparked something else off, or its existence triggers a memory, or there's some element of the film you particularly enjoy, like the soundtrack or a particular actor, and you could write about that instead.
Or it could be that you had to battle somehow to see a film. Could be that you even decide to write about the reason you didn't see a film. For some reason.
IMPORTANT: It needn't be about something which happened this year. Although it could be, if there's something you'd especially like to talk about.
Oh and don't worry if I've already written about the film.
Length: As little as a paragraph. As long as it needs to be, so you can write and write and write. There isn't a word limit. If someone else thinks it's tl;dr that's their problem.
I'll illustrate each piece with a still from the film. If you'd like to suggest something, that's excellent.
As ever this is just a guest blogging thing for fun, but if you have a project or some other thing you'd like me to plug, then let me know and I'll add it in at the bottom along with your Twitter details or personal website or what have you.
Send your entries to email@example.com with a subject line "Review 2015".
Weird to be Frank for a change. Oh well. Hi, I've got a story I want to share with you.
First, about the Heteromeles identity: Back when I first started posting comments here, I was working in an environmental consulting company that had a rule that they owned all their employee's creative output. My solution was to use "Heteromeles" for my private online activities and my own name for stuff I did for the company, so that if they ever did want to claim ownership, it was obvious how far they were over-reaching (they never did, of course). Heteromeles, for those few who haven't googled it, is the genus of my favorite plant: toyon, the "holly" that gave its name to Hollywood, which isn't too far from where I grew up. Toyon's not a holly (it's closer to photinias and hawthorns if you care), and it's known among botanists as the only woody plant in California that kept its Indian name. By the time I started freelancing, I decided to keep Heteromeles as my online identity more for the sake of continuity than anything else.
So about Hot Earth Dreams: Back in late 2012, I'd gotten well and truly sick of the Mayan Apocalypse (remember that? What were they thinking again?), and started wondering what it would be like to write a novel set in the deep future on a climate-changed Earth ...
...Well, one more digression. I've written two self-published novels, and I'm working to become a commercial author. Thing is, I'm an ecologist at heart. What I really love is world-building, starting from the physical constraints, working up through the geology, vegetation, lifeways, technology, and ending ultimately in the characters, whom I firmly believe are strongly shaped by their environments. I realize that this is almost diametrically opposed to the way most writers work, but this is relevant because...
... I started by asking myself the question: what will the Earth look like if severe climate change happens, and humans survive? At that point, my brain froze, because even though I thought I knew a lot about climate change, I couldn't find the words or even an image. This future was, quite literally, unspeakable. I couldn't articulate anything. It felt like my mind hit a wall and stuck. I had the intuition that my scientist and environmentalist friends were as blind as I was, and that scared me, because it seems like a very likely future. So I asked around, and my intuition was right. It became my great conversation killer, asking the question and watching people freeze and go silent, watching parents take long, sad looks at their children playing nearby, before they changed the subject. The best anyone could offer was "it'll look like Waterworld." I stopped asking.
But, because I'm a crazy ecologist and world-builder, I thought I could answer that question. I also realized that people probably wanted to know the answer more than they wanted a novel based on it. So, with a great deal of trepidation, I started reading, and most of three years later, I'd finished Hot Earth Dreams.
As my friend Matt put it, this is a sourcebook for the deep future. It's 42 chapters, 900-6,000 words each, designed for people with short attention spans who want to easily find ideas once they've read them. This isn't a future history. Rather, it's what you need to know and think about to write a future history. I wanted to make it easier for people to talk about such a future, to dream about it. That's part of where the title came from. Another reason it's called Hot Earth Dreams is that it's pure speculation, a conceptual model, a what-if story that happens to be formatted as a non-fiction book. It's well-grounded speculation of course, but as most of you know, any congruence between futurist dreams and the actual future is more due to random luck than anything else, and I wanted to make that clear. I'm looking to inspire people to think and create, not to recruit acolytes. It's also a unique title, so it's easy to find.
You'll find it useful if you want to write a science fiction or fantasy story set in the deep future, if you want to create a game, a comic, or art, or write up a scenario for your boss. You'll also find it useful if, like me, you're trying to figure out why conservation still matters under climate change. If you're a climate activist suffering from pre-traumatic stress disorder, perhaps this will give you a reason to struggle on. Even though this is a very pessimistic book, everyone who's read it so far thought it was uplifting. Apparently, being able to speak about a scary future is a good thing.
The critical thing is this: I couldn't have done it without you, the regulars on Charlie's blog. Thank you all! This book is the result of one of those "strange attractors" that dominates the deep reaches of Charlie's posts, when the original topic has long derailed and people start talking about other stuff. Over the years, I've floated various ideas that are now in Hot Earth Dreams just to see how people would respond. Sometimes they went over well, sometimes they didn't, sometimes someone would point out a flaw that caused me to rewrite a section. Without this feedback, both positive and negative, I could not have written this book, so thank you all, individually and collectively, for helping me.
And yes, please buy this book. It's self-published through Createspace, but that's due to the changing nature of the non-fiction publishing world. According to what I learned, in non-fiction publishing, whether someone buys your book proposal is less about the quality of your manuscript and rather more about how big your existing audience is. This is why celebrities can churn out books so fast—they cite their number of Twitter, blog, etc. followers as part of the proposal and get their manuscripts published on the assumption that all of their fans will want one. I'm trying to break in the back way, which is self-publishing to see how well it does, then shopping the proposal. If enough people buy this book, it becomes commercially viable, which means I can work with a publisher to get the next edition into bookstores, libraries, book reviews, dorm rooms, and onto everyone's radar; in other words, to level up. If you buy this book, you will be one of those early adopters who determines its fate. If you like it, please review it online, recommend it to your friends, help it spread through word-of-mouth. You can also provide feedback, both positive and negative, at my blog, which is (of course) http://heteromeles.wordpress.com.
Hot Earth Dreams is now on sale at Createspace (https://www.createspace.com/5799140)
on Amazon (http://www.amazon.com/Hot-Earth-Dreams-climate-happens/dp/1517799392) and on Kindle on November 13 (https://www.amazon.com/gp/product/B017S5NDK8)
I'll update this post as soon as it goes on sale at Amazon (which should be this weekend or Monday).
You can read the first five chapters here.
Thank you again for helping me get it written.
The original purpose of the Wall Street stock market was to raise capital for big investment projects, such as railroads and steel plants. Later on it became a system for owners to take money out of their companies to fund lavish personal lifestyles by selling on their shares to speculators. The activity has about as much to do with investment as the Department of Defence has to do with defence.
There has to be a downside to turning shares into cash-money, and that downside is that you can sometimes lose control of your company.
Amazingly, due to the powers of no-regulation, there’s a way round this problem, as can be seen from the shenanigans at Google by the fabrication of special kinds of shares:
Owners of Google Class A shares — ticker symbol GOOG through Wednesday — will get an equal number of new Class C shares. Those Class C shares will get the GOOG symbol, while the Class A shares will trade under the symbol GOOGL.
Why bother? The new Class C shares have no voting rights. The Class A shares have one vote each, but collectively those votes are dwarfed by the 10-votes-per-share Class B shares. Those shares, which do not trade in the public market, are owned by Google insiders, who will also get Class C shares in the distribution.
As originally proposed by the company, the move would have made it easy for Google’s founders, Larry Page and Sergey Brin, and the chairman, Eric Schmidt, to cash in a large part of their holdings without giving up their voting control. But that ability has been limited after the company settled a class action suit filed by angry (Class A) shareholders, and reached agreements with the three top officials to limit their sales.
So the total number of votes will not be rising, and that will delay the day when the company’s leaders lose voting control of the company. Currently they own less than 16% of the company’s shares, and have 61% of the votes.
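The arithmetic of that dual-class arrangement is worth seeing worked through. Here's a minimal sketch in Python; the share counts are made-up round numbers chosen to roughly reproduce the 16%-of-shares, 61%-of-votes split quoted above, not Google's actual figures.

```python
# Illustrative sketch of how a dual-class structure concentrates votes.
# Share counts below are invented round numbers, not Google's real ones.

def voting_power(holdings):
    """holdings: list of (shares, votes_per_share) tuples.
    Returns, for each block, its fraction of shares and of votes."""
    total_shares = sum(s for s, _ in holdings)
    total_votes = sum(s * v for s, v in holdings)
    return [(s / total_shares, s * v / total_votes) for s, v in holdings]

# Insiders hold ~15% of shares as 10-vote Class B; the public holds the
# rest as 1-vote Class A. New Class C shares carry zero votes, so issuing
# them changes neither total below — which is the whole point.
insiders = (150, 10)   # (millions of shares, votes per share)
public = (850, 1)

for label, (share_frac, vote_frac) in zip(
        ["insiders", "public"], voting_power([insiders, public])):
    print(f"{label}: {share_frac:.0%} of shares, {vote_frac:.0%} of votes")
```

With these invented numbers the insiders end up with 15% of the shares but about 64% of the votes, and the company can issue any quantity of voteless Class C stock without diluting that control at all.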
I don’t know what the technical situation is with Autodesk’s shares, but they’ve been listed a long time and games like this might have already played out.
I'm on the case because I'd just heard that the directors of Autodesk may be panicking that their cosy self-appointing clique of millionaires and the lines of yes-men in their employ could soon be shaken up. Former McKinsey&Company analyst Scott Ferguson's activist hedge fund Sachem Head raised over $800 million in 2013 and is off to make mischief through the acquisition of minority shareholdings in companies on the stockmarket.
This compares in scale with Autodesk's recent bond issues of $750 million in 2012 and another $750 million in 2015, used for the purpose of sudden 100% buyouts of companies, whereupon their wonderful managers can be let loose like little children in a toy store with nobody to see how badly they're behaving.
My first thought was that this might be an action like that of Dan Loeb investing in Yahoo, getting on the board and agitating for the recruitment of Marissa Mayer as CEO, someone who actually had a clue about modern software product development and could turn the company around after years of paranoid corporate management by characters like Carol Bartz, who cut her teeth running Autodesk for 14 years prior to her term.
But that would be hope triumphing over experience that the corporate-financial-governance world could deliver something positive from the perspective of world wealth and productivity.
As they say in natural history science, the past is the key to the future.
Reviewing the records before this month, I note that Hedgefundtracker in August wrote:
During this quarter, Scott Ferguson’s Sachem Head Capital Management Lp started new positions in Allergan Plc for $295.87 million and PTC Inc for $91.27 million. These were the 2 biggest new positions…
[His] fund disposed its stakes in Actavis Plc (ACT), Salix Pharmaceuticals Inc (SLXP), Mylan N V (MYL) and Bio Rad Labs Inc (BIO). These stocks constituted 15.77%, 8.31%, 7.64% and 2.51% of the portfolio, respectively. We can only speculate about the reasons for the dumping but we believe it has to do with either value, momentum or a better place for Sachem Head Capital Management Lp’s capital.
While an acquisition of a company by Autodesk is allegedly for the purpose of technology synergy or competitive disruption, a hedge fund acquires and manages companies for the purpose of disposal of the remains at a higher price. One can discover the competence and style of these actions by looking at what happened to the tech or the capital a few years after the exciting moments of acquisition long after it has fallen from the news.
Let’s take these reported disposals in order by consulting Mr Wikipedia.
Actavis (formerly known as Actavis, plc, prior to the acquisition of Allergan, inc) is a global pharmaceutical company… On June 15, 2015, Actavis, plc changed its name to Allergan…
I can't work out who owns whom, but it's obvious that the so-called "new stake" in Allergan probably cancels the supposedly disposed stake in Actavis. It's confusing to the financial journalists, who seem not to have developed a system.
Let’s check Mr Wikipedia’s record on Allergan, inc then:
On April 22, 2014, details were released by Valeant Pharmaceuticals and hedgefund CEO, Bill Ackman, about a $46 billion offer presented to Allergan… Allergan, Inc stockholders would own 43 per cent of the combined company. This bid was rejected by Allergan as being too risky, claiming Valeant's business model of serial acquisitions and low organic growth being unsustainable… On May 31 the offer was revised and increased to $53.3 billion. On June 18, Valeant began its tender offer for a hostile takeover of Allergan.
On August 27, 2014, Valeant and Pershing Square Capital Management asked a Chancery Judge to set a trial for September 24, 2014 to decide on whether Valeant and Pershing had properly secured enough support from Allergan shareholders to force a meeting of investors to consider replacing a majority of the company's directors. On the same day Allergan announced that they had set a December shareholder vote to decide whether the company should replace part of the board of directors.
In the afternoon of August 27, Bloomberg reported that Valeant and Pershing Square had won their case with the Chancery Judge setting an October 6 date for the aforementioned trial.
On November 17, 2014, Actavis, plc announced it would acquire Allergan in a white knight bid for approximately $66 billion, putting an end to Valeant's hostile takeover attempt.
Hmm. Something familiar about those names from an earlier reference:
Scott Ferguson, a protege of Pershing Square Capital Management’s William Ackman, has raised more than $800 million less than four months after launching his new activist hedge fund, according to people with knowledge of the firm.
Ferguson — who was Pershing Square’s first analyst, eventually becoming a partner — left the New York-based firm after nine years in 2012 to launch Sachem Head Capital Management. Ackman was among Ferguson’s initial investors, according to reports.
Anyways, here’s a short snippet from the press about what it felt like to be the football in this match:
David Pyott (CEO of Allergan) got on the defensive when Ackman, who had a huge stake in Allergan, tried to force the takeover by Valeant. Pyott accused both Valeant and Ackman of insider trading and said Valeant was “vile.”
While the story ended with Allergan being bought, it was by Actavis, not Valeant, at $219 a share rather than Valeant's offer of $152 a share. In addition, Actavis respects Allergan's business culture and won't cut its R&D budget.
Pyott said he is often consulted by other CEOs whose companies are under attack by the likes of Ackman.
Still, Ackman was “rewarded” for his efforts; although the horse he backed, Valeant, didn’t win, as a large shareholder in Allergan, he managed to drive up the price of Allergan through the takeover drama, and emerge as one of the most successful hedge fund managers of 2014.
Moving on to Salix Pharmaceuticals (disposal number 2):
In February 2015 Valeant Pharmaceuticals announced it would acquire Salix for $15.6 billion, creating the U.S. leader in gastrointestinal drugs. The deal was completed on April 1, 2015.
Here’s a snippet of news on Mylan (disposal number 3):
Since April, Teva has been pursuing a hostile takeover bid of more than $40 billion for Mylan N.V. Last week Mylan invoked a "poison pill" to avert the takeover, when the Dutch fund Stichting exercised its call option to temporarily take over Mylan and protect it from being bought…
[Instead Teva is set to buy the generics unit of Allergan plc, in a deal worth $45 billion.]…
If the Allergan unit is spun off and merged with Teva, then Mylan’s $35 billion takeover bid for Perrigo Company could go ahead, as planned.
What do we know about Bio Rad Laboratories (number 4)?
Bio-Rad Laboratories Inc.’s former general counsel hit it with a suit in California federal court Wednesday claiming the life sciences research company illegally fired him after he reported that its leadership may be engaging in bribery in China.
Until 2009, Bio-Rad made $7.5 million in improper payments to Russian, Thai and Vietnamese officials to win business. A federal investigation was resolved when the company agreed in November 2014 to pay nearly $41 million in disgorgement and interest to the SEC and $14 million in fines to the DOJ.
What’s going on here?
It's like there's some kind of James Bond adventure rampaging through the pharmaceuticals industry. Are these two dudes working as a team or as cut-throat competitors? The relationship is disclosed by this story about Zoetis:
Fresh off a big win from investing in medicine for humans, Bill Ackman is now focused on pharmaceuticals for animals.
Pershing Square Capital Management recently disclosed an 8.5 percent stake in Zoetis, which makes animal medicine and vaccines that are bought by veterinarians, livestock farmers and other animal owners….
Pershing Square is working on Zoetis with Sachem Head Capital, a more than $2 billion activist investor led by Ackman protege Scott Ferguson. Together, the two hedge fund firms own about 10 percent of the company. Pershing Square said it paid $1.54 billion for its stake, according to a news release.
The reason the past is the key to the future is that people’s skills and experience take a long time to acquire, and that’s what you’ve got to use. It’s a hard fact of human information processing and brain coding and as much part of a man as his language and friendship network.
What's with these names?
Well, clearly Pershing Square Capital Management takes the name from a concrete plaza outside the train station 18 blocks away from their office in Manhattan. The office of Sachem Head is 8 blocks away.
A Sachem is a Native American paramount chief. The word was borrowed by an organization known as Tammany Hall, which dominated New York City politics in the 19th century, as the name for their leaders. This is my guesstimate as to the source for the name Sachem Head. So maybe he wants to get into politics.
This is very American and a big problem for the following reason.
The reason these pharmaceutical companies are so profitable and can deliver minimum service at maximum cost, is that governments grant them patent monopolies to inflate prices by over 5000%. This is justified on the basis of their incredibly inefficient research. But really it gives them huge revenue streams for the financial boys to play with, and deadly conflicts of interest that would be illegal under normal circumstances without the protection of political cover and the ability for the executives to buy themselves out of criminal proceedings by buying out the prosecutor with shareholder money.
It’s a very special industry, perfectly suited for Tammany Hall.
The software business is known to Sachem Head through a small stake in PTC (another CAD company), and CDK Global, which sells car dealership software.
My intellectual interest would lie in determining whether their talents lie in squeezing more money out of the customers who are trapped by their needs or in the production of better products that are more competitive on the open market.
I haven’t got time to look into this anymore today. I’ve just got a Beaglebone Black which I really should have been playing with all morning in an attempt to do something useful with my time. (The only thing I can do about the corporate finance world is to keep myself and my work right out of it.)
Maybe these hedgies will port their know-how from the pharma industry and turn Autodesk into the world’s biggest software patent troll to go after this multi-billion dollar wealth-destroying business.
That would be funny in light of all my complaints to the management at the time about how they were taking out patents, and their assurances that everything was okay, because “We’re not like that,” they said.
This is a bit like being content with the government spying on all our data to protect public safety and the national interest, which is okay until the government changes into something much less benign and it’s all too late.
Even if these speculations are true, it doesn’t help anything because there’s never a reward for being right. The only thing that matters is power.
I'm excited to be travelling to Washington DC today to take part in a 3 day USAID co-creation workshop.
Authors: Xavier Dumusque, Alex Glenday, David F. Phillips et al.
First author’s institution: Harvard-Smithsonian Center for Astrophysics
Status: Accepted to ApJ Letters
Planets or Starspots?
1642. That's how many planets have been confirmed orbiting other stars. The two main ways to find exoplanets are the transit and radial velocity methods. When a planet passes in front of its star (a transit), the star dims slightly; this dimming is detected by telescopes like the Kepler Space Telescope. The radial velocity method instead uses the gravitational pull of an orbiting planet on its host star: the planet causes the star to wobble back and forth slightly. The star's spectrum shifts to bluer colors when the star wobbles towards us and to redder colors when it wobbles away, indicating a shift in the star's radial velocity along our line of sight. This periodic shift can be used to infer the presence of an orbiting planet.
Both of these methods have problems. Stars, like the sun, are not uniformly bright disks. They have dark spots, linked to magnetic fields and coronal mass ejections, which can masquerade as transiting planets. Other internal effects from the star can also cause changes in the observed radial velocity, confusing our attempts to detect planets that way. Since in most cases we can’t see the planet directly, it is difficult to disentangle stellar effects, like star spots, from the planet’s influence. To discover an Earth-size planet with a year-long orbit around a sun-like star, we must measure a radial velocity shift of just 10 cm/s (about the speed of a sloth!), which is several times better than has been achieved so far. We must account for the complex stellar effects that produce radial velocity shifts. The trouble is, the stars that host exoplanets all appear only as points of light. We can’t see the features of the star like starspots that lead to these velocity shifts.
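The 10 cm/s figure quoted above can be checked with the standard two-body expression for the radial-velocity semi-amplitude of a star tugged by a planet on a circular, edge-on orbit. A back-of-envelope sketch in Python (my own check, not from the paper; constants are rounded standard values):

```python
import math

# Radial-velocity semi-amplitude K of a star orbited by a planet,
# for a circular orbit seen edge-on (sin i = 1):
#   K = (2*pi*G / P)^(1/3) * m_p / (M_star + m_p)^(2/3)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_EARTH = 5.972e24     # Earth mass, kg
YEAR = 3.156e7         # one year, s

def rv_semi_amplitude(m_planet, m_star, period):
    """K in m/s for a circular, edge-on orbit."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet / (m_star + m_planet) ** (2 / 3))

k = rv_semi_amplitude(M_EARTH, M_SUN, YEAR)
print(f"Earth twin around a Sun twin: K = {100 * k:.1f} cm/s")
```

This comes out at roughly 9 cm/s, which is where the ~10 cm/s precision target for finding Earth twins comes from.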
Dumusque et al. decided to treat our own Sun as if it was a potential exoplanet host. They pointed an instrument built for discovering exoplanets at the Sun, measuring its radial velocity shifts over several days. By comparing the radial velocity measurements with high-resolution images of the Sun they hope to tease out the correlation between activity on stars and the velocity measurements. These calibrations should help advance the radial-velocity frontier toward the 10 cm/s precision needed to discover Earth twins.
The Stellar Jitter
There are several types of effects that can cause velocity shifts in stellar spectra, called stellar jitter. Because of the competition between gravity pulling inwards and pressure pushing out, stars can oscillate like a drum. This causes a periodic shift in the radial velocity of the stars. Stars also experience convection. Like a boiling pot of water, the surface of a star is a patchwork of hot regions moving upward, cooling off, and descending again. Starspots can also cause a shift in the radial velocity of the star. As they rotate with the star, they will obscure parts of the star that are moving towards or away from us, causing a shift in the measured radial velocity of the star. All of these effects are hard to characterize on a distant star, but we can see them happening in real time on our Sun!
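The starspot effect described above lends itself to a quick order-of-magnitude estimate. The apparent radial velocity is the flux-weighted mean line-of-sight velocity of the visible disk; a dark spot removes light that is Doppler-shifted by the local rotation velocity, dragging the mean the other way. This is my own toy sketch, not the authors' model, and the sunspot numbers are round illustrative values:

```python
# Toy estimate of the RV shift from a single dark starspot.  A spot
# blocking a fraction f of the disk's light at projected position x
# (in stellar radii, -1..1) removes light shifted by v_eq * x, so the
# flux-weighted mean velocity of what remains shifts the opposite way.

def spot_rv_shift(f, x, v_eq):
    """Approximate apparent RV shift in m/s.

    f    : fraction of the disk's light blocked by the spot
    x    : projected distance from disk centre (negative = approaching limb)
    v_eq : equatorial rotation speed of the star, m/s
    """
    v_spot = v_eq * x                # line-of-sight velocity of blocked patch
    return -f * v_spot / (1.0 - f)   # remaining light shifts the other way

V_EQ_SUN = 2000.0  # m/s, roughly the Sun's equatorial rotation speed

# A sunspot group blocking 0.1% of the light, halfway out on the
# approaching (blueshifted) limb:
shift = spot_rv_shift(0.001, -0.5, V_EQ_SUN)
print(f"apparent shift ~ {shift:+.2f} m/s")
```

Even this modest spot produces a shift of about 1 m/s, ten times the 10 cm/s signal of an Earth twin, which is exactly why this jitter has to be characterized and subtracted.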
The Sun as a Star
The authors built a simple solar telescope to use with the High Accuracy Radial-velocity Planet Searcher North (HARPS-N). HARPS-N is a spectrograph built to take spectra of stars in order to measure the radial velocity shift induced by orbiting planets. In the case of the Sun, the authors were interested in measuring the radial velocity shifts across a few days due to the stellar effects mentioned above. Their results are shown in Figure 1. This plot shows the radial velocity of the Sun measured during several periods over the course of about a week. The scatter in these observations is much greater than the error bars on the points from instrumental uncertainty. This means that the scatter is driven by real effects on the Sun. The authors used images from the Solar Dynamics Observatory (Figure 2) to investigate the activity of sunspots during this time. The highest velocity shift observed corresponded to a Sun with lots of activity, as expected.
By characterizing the relation between solar activity and the solar radial velocity, the authors hope to be able to improve their techniques for subtracting these solar effects from the velocity data. They predict that they will even be able to detect the pull of Venus on our Sun in the next couple years. Applying these lessons from the Sun to stellar radial velocity measurements should likewise allow us to gain enough precision to detect Earth-twins.
SpaceX has completed development testing on its SuperDraco propulsion system, used to propel the company’s Crew Dragon spacecraft away from a Falcon rocket in the event of a launch failure.
Planetary scientist and dust devil expert Ralph Lorenz describes how the upcoming Mars InSight lander's sensitive seismometer might be able to detect dust devils.
November 11, 2015
Life It's been an interesting day, many demons laid to rest for reasons inside the limits of an adjunct to the blog rules (original posted back in 2002) so I hope you'll indulge me in some nostalgia.
Here's Alanis being a sport on the James Cordon show with updated lyrics for Ironic.
She's been around the chat shows lately, presumably because it's the twentieth anniversary of Jagged Little Pill, which she also has an essay about on her website. The anniversary deluxe boxed set seems like a more fitting tribute than the bizarre covers album released in Starbucks (and which I spent a lot of time plaguing baristas about that year).
Regular readers of this blog's comments will be familiar with Frank Landis, although not under that name—he comments here under the alias Heteromeles. Frank is a part-time environmentalist, a part-time consultant, a house-husband and a writer; he's one of the people who earned a PhD, in his case in botany (he's a plant community ecologist and mycorrhizast by training, with a background in environmental science), failed to land a job in academia, and got downsized out of the business world by the Great Recession. He's guest-blogging here tomorrow about his next book, Hot Earth Dreams, a look at how the Earth's biosphere is likely to change in the wake of anthropogenic climate change ...
After Jeb Bush gave away his private email address to students at a rally in New Hampshire so that they could ask him questions, one of the students put it on Reddit so that he really could speak to the multitude. Naturally he was bombarded straight away with the usual deluge of things that people on the internet like to throw around, and to his credit he actually started to respond to some of them, even the stupid ones. One email he received asked him "if you could go back in time and kill baby Hitler, would you? I need to know."
“Hell yeah I would!” Jeb answered. “You gotta step up, man.”
Going back in time and killing Hitler is a traditional fantasy in western liberal democracies like America and the UK, recently brought to the surface by NYT Magazine’s poll on a slow news day asking readers if they would go back and kill baby Hitler to save the world. It’s a pretty old meme. Imagine the pain and suffering that could be averted! There’d be no Holocaust, no WWII! If we killed baby Hitler we could outflank Ben Carson’s own insane alternate history where all the Jews have guns and stop the holocaust by protecting themselves from big government. Or with no Hitler we could reenact the endless Cold War depicted in Command & Conquer: Red Alert!
This was the apparent reasoning of the largest group of poll respondents, at 42%, while only 30% responded that they would not kill baby Hitler. The rest weren't sure. I want to take issue with those 42%, and with Jeb, because however scared we all are of fascism, baby Hitler is the wrong target.
To begin with, we have to wonder what Jeb and others think causes a fascist dictator to come into existence. Was little Adolf already a seed of pure evil when his mother Klara gave birth to him? Perhaps she dropped him at 6 months old and he just sort of turned nasty at that point? The ethics of killing a baby are shaky enough when the proposal comes from a pro-life presidential candidate. Is Jeb saying that Hitler was just born bad from the get-go and should therefore be terminated before he can ruin the world with his inherent degeneracy? I hate to Godwin, but…that sounds suspiciously similar to the basic premise of eugenics.
By saying that Hitler was already bad as a baby, we’re saying that evil men, violent men, do not become so because of the world around them. In the case of Hitler, what is it about him that we’re against? If it’s because we don’t like violent state repression, surveillance, racism, the dangerous entitlement complex of the mediocre white male down on his luck, economic collapse…then why blame a baby for that? Capitalist crises and ethnic cleansing existed long before Hitler, and they got worse well after he was born.
And this lets white supremacy off the hook. Let's face it: racist violence is an undeniable aspect of white European history. We've been at it for centuries. Even ignoring hatred of Jews in the ancient world, Christian antisemitism had been prominent since medieval times, with feudal lords whipping up pogroms and expulsions of local Jewish populations as standard operating procedure for a very long time. When the evil non-science of eugenics rose to prominence in the nineteenth century, racial antisemitism took up the old baton, emphasizing supposedly biological characteristics of Jews in order to justify their elimination. Hitler invented none of these, and while he is certainly responsible for exacerbating them and for being at the helm of the greatest such crime of its kind on European soil, he did so on the crest of a nationalist wave that had been rising for some time.
The reason killing baby Hitler seems like such a neat way to let out all that steam is because we are still in denial about the inherent oppressiveness of Eurocentric nationalism and capitalism. We hate Hitler not just for killing Jews but for enslaving them – and yet the modern wealth of Britain, France and America is very easily traceable to that which was created by slavery and colonial exploitation. Libertarians may look back on the 18th and 19th centuries as a heroic age of robust free markets, but where capitalist markets did not exist they were coerced onto millions of black and brown people across the world by violent, state-backed force. That so many would prefer to view Britain's looting of Africa and India as somehow superior to Hitler's looting of Jewish homes, black homes, the homes of the countries he invaded, contains a nasty little omission: that Hitler did what his neighbors had been getting away with for a while. And we didn't give that money back.
The baby Hitler argument assumes that the holocaust was not a social problem and that with the great dictator now removed, it is behind us. I wonder if Jeb Bush would tell that to the Urwintore Theatre Company. This is a Rwandan troupe that performed The Investigation, Peter Weiss’s play about the Frankfurt war crimes trial, where the guards of Auschwitz were tried. The focus of the play is not some trite ‘never again’ message that seeks to comfort audiences that the worst is over. One witness in the play – whose dialog is copied quite liberally from the trials themselves – says instead that “the society that produced the camps is our society”, and as a specifically Rwandan cast, they do not have such ease in forgetting their genocide as we do.
Killing baby Hitler forgets also that Hitler had more than a few supporters in the supposedly ‘balanced’ countries of the West who didn’t capitulate to the inexorable groundswell of early 20th century fascism. At home he had the backing of much of Weimar Germany’s capitalist class, and indeed the rate of return on private investment under Nazism skyrocketed between 1932 and 1940 – in other words, there were wealthy interests who very much understood the benefits Nazism could reap for them, until it was too late. Outside of Germany, meanwhile, Hitler enjoyed significant support from antisemites in the French establishment (which was already very well prepared for the racial violence they enacted in the Vichy regime). There was an American coup plotted in 1934 by top Wall Street representatives of the Rockefellers and J.P. Morgan.
The British establishment was especially sympathetic to fascism. Numerous elements of Britain’s royal family were sympathetic to the cause, leading to an interesting scandal recently which revealed that as a young girl Queen Elizabeth was photographed being taught a Nazi salute by her mother and uncle. Winston Churchill gave glowing praise for the nationalist spirit and anti-communism of both Hitler and Mussolini. And, most famously, Lord Rothermere, the owner of the Daily Mail, was a friend of Hitler’s and placed his full support behind Hitler and his British street-level counterpart Oswald Mosley in the years leading up to the War.
Clearly the desire to kill baby Hitler ignores the systemic crisis of Europe and America of the time. It ignores the fact that the Nazis weren’t simply imposed on the world by one nasty guy, but were eased into power by ancient racist undercurrents and scheming elites who thought they could control him. That we don’t also fetishize baby Franco, baby Mussolini, says enough to make it suspect – is it because they weren’t as awful as Hitler, and therefore weren’t as sufficiently different from us? Let’s kill baby Pinochet! Oh wait, who put him there?
This is the crucial problem with baby Hitler, the real horror that wanting to kill him tries to disguise: that it could have been anyone. It’s technically correct that, Back to the Future style, killing baby Hitler would change the course of history. It just wouldn’t change the underlying currents – yes, we might end that particular iteration of that particular holocaust, but it’s unlikely that killing one baby would significantly (or at all) shift the balance of power in favor of better forces. Baby Hitler is Agent Smith – the forces of history and hatred and inequality and violence can elevate any random psychopath in the street to arch-villain status, and even if you kill one, the system that he infects is still very much alive and will spit him out once more.
To put this another way, would you go back in time and kill baby Dylann Roof? According to a CNN poll, a smaller proportion of white Americans think the adult Charleston shooter should be charged with terrorism than think the infant Hitler should be killed. That is the sound of a system excusing itself for a crime it does not feel it is in any real way connected to. To be clear: I would not go back in time and kill baby Dylann Roof. I would go back in time and change the conditions that he was raised in, that led him to become such a monster. I would change the way he grew up to feel threatened by black men; I would change the social or family setting that gave him that sad sense of entitlement, which made him feel like he had patriarchal ownership over white women. He is not just something horrible, he is the culmination, the inevitable result, of several things horrible. The truth is there is no time machine that can give us absolution for the crimes of white supremacy.
This is what Jeb really wants: to be the hero who saves ethnic minorities and has their gratitude rather than their suspicion. We can ignore for now the fact that the murderers of children have already been conflated with defenders of the nation in recent American history, or that the Bush family really ought to know by now that killing children in faraway lands tends to cause rather than solve national security problems. His denial that matters most on this issue is that every nasty person in a position of power who ever harmed anyone else got their power with the consent of other people – and this is why Baby Hitler 2.0 has probably already been born.
Of course, we can’t yet know who he is. We can’t, King Herod style, find and kill this baby by eliminating a threatening demographic. For all we know, this baby is currently quite nice, and has nice parents. But if it is an American or European baby, we know certain things to be already true: the conditions which allowed a whole host of authoritarian dictatorships to rise in the early 20th century are coming about again. States are increasingly authoritarian. Inequality and insecurity are on the rise. Western nations with a penchant for racial violence are returning to old habits, with Muslims now openly talked about in the same way as Jews were a century earlier. Far right parties are cashing in on this fear and loathing in the same way as before. Our insane financial system and the austerity it has imposed on us are exacerbating all of these conditions.
So if Baby Hitler 2.0 has already been born, it is worthless to hunt him down, but it is perhaps instructive to identify his midwife. Who is it that is creating grounds for inequality and suffering? Who is encouraging ordinary but increasingly impoverished people to scapegoat minorities? Who is increasing surveillance and the tools and powers of state violence? Wouldn’t that quite neatly describe the machinations and impact of the Republican and Democrat parties of recent years?
If Baby Hitler 2.0 is to be killed, our task is to hope that he still has time to become something other than Hitler 2.0. By looking at the ground already being made fertile for fascism and turning it into something better, before it’s too late. As Jeb said: you gotta step up, man.
Cover image: John Pemble
Recently I did an opening interview for the On Demand Conference here in New York. As part of the conversation we discussed intermediary liability for platforms and I suggested that complete transparency was a good trade for intermediary liability. Someone from the audience became quite upset and asked, “so you mean Uber should have no liability for a rape by one of their drivers?” I was reminded of that moment when I read the medium post about “Living and Dying on Airbnb.” So let me explain my position further.
Right now platforms have intermediary liability. So for instance someone who suffers injury in an apartment found on Airbnb could sue Airbnb and not just the owner of the apartment. The same is true for Uber and also for marketplaces such as Etsy. Fear of such lawsuits leads to the paradoxical situation that platforms withhold a lot of safety-related information, which in turn means there are calls for stricter regulations.
Consider for example the situation of a rental on Airbnb. The government could decide to require an inspection for safety violations of every place that is rented out. Once you start to go down this route two things happen. First, it severely restricts the reach of a network or marketplace (which is on the margin bad for all participants) and second, it means a lot of further regulation is required which specifies what kind of safety inspection, the frequency with which it has to take place and so forth. In turn, there is a strong incentive to make the minimum effort needed to clear whatever bar the government has set.
Now consider the opposite approach. The government offers Airbnb immunity from intermediary liability in return for much more transparency. For instance, the requirement could instead be to disclose whether anyone has stayed at a location before (not just whether someone has reviewed it), along with offering a field where hosts can display the results of any voluntary safety inspection they have conducted (or none if they have not done one). Now a potential renter has significantly more information available. Maybe I don’t want to be the person staying at a place that no one else has ever been to and that doesn’t have a safety inspection.
If, in addition, Airbnb were required to disclose safety statistics and make rentals fully searchable (to allow enforcement of local or even building-level regulations), then it seems fair that there should be no intermediary liability. A similar fundamental difference in approaches is possible for every network that operates on the Internet. You can either try to regulate the network’s behavior in detail or you can regulate for increased transparency.
Providing networks with a relief from intermediary liability is not at all unprecedented. For instance, we decided long ago to do this for the telephone network. Imagine how little progress we would have made with communications if the government had imposed strong liability on the network operators for every crime committed using the telephone network. Or imagine the same for the Internet at large.
We want more networks/platforms, not fewer. These are a good thing with immense benefits for all participants, as they can result in much better sharing and usage of existing resources. Under my proposal, though, these companies will have to make a decision: either they become much more transparent and offer real choice to participants, or they should be as liable as a traditional (non-networked) provider. For instance, Uber would have to pick more clearly between being a platform/network and being a transportation company.
- Title: The origin and evolution of r- and s- process elements in the Milky Way stellar disk
- Authors: Chiara Battistini and Thomas Bensby
- First Author’s Institution: Center for Astronomy, University of Heidelberg, Germany
- Paper Status: Accepted for publication in Astronomy & Astrophysics
As Carl Sagan famously put it, “We are made of star-stuff”. All the elements in our bodies heavier than lithium were fused by stars that died, enriching the material that the Earth and Sun were born from. But what is less well-known is that not all of these elements came from the same places, or were produced at the same time. Some elements (like carbon and oxygen) can be fused in the interiors of stars during their regular lives. But many heavy elements are produced only in the dying stages of a star, such as an explosive supernova (SN) or the winds from an Asymptotic Giant Branch (AGB) star.
The authors of today’s paper are engaged in Galactic Archaeology: the process of uncovering the star formation history of a galaxy (here, the Milky Way) using stars which are left over from when the Milky Way was first forming. They use measurements of heavy element abundances to study the history of how and when the Milky Way became enriched in heavy elements. Specifically, they are concerned with elements formed through slow and fast neutron capture.
Neutron Capture Elements
Typical nuclear fusion (colliding two nuclei together to produce a larger one) is only able to produce elements up to iron. Elements more massive than iron are created through a mechanism called neutron capture, where a nucleus absorbs a number of stray neutrons, some of which beta-decay into protons. You’ve now created a heavier element! But what elements can be produced this way depends on how rapidly the neutrons are added onto the nucleus.
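As a back-of-the-envelope illustration of that mechanism (this is just nuclear bookkeeping in code, not a physics simulation, and the function names are mine), here is the canonical first s-process step, where iron-58 captures a neutron and the resulting iron-59 beta-decays into cobalt-59:

```python
# A nucleus is tracked as (Z, A): proton number and mass number.

def capture_neutron(z, a):
    # Absorbing a neutron raises the mass number only.
    return z, a + 1

def beta_decay(z, a):
    # A neutron decays into a proton: Z goes up, A is unchanged.
    return z + 1, a

z, a = 26, 58                  # iron-58
z, a = capture_neutron(z, a)   # iron-59, which is unstable
z, a = beta_decay(z, a)        # cobalt-59: a heavier element
print(z, a)  # 27 59
```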
The “s-process” (or slow neutron capture) occurs when a nucleus only absorbs one neutron at a time. The s-process takes place in AGBs, low-mass stars which are in the process of dying and ejecting their outer envelopes. Since low-mass stars have very long lifetimes, AGBs should only begin to enrich their surroundings around a billion years after they are first formed.
The “r-process” (rapid neutron capture) involves a nucleus capturing many neutrons in a short time, compared to how quickly the neutrons decay into protons. There are several places where the r-process may occur, with one example being a Type II supernova (SNII) of a very massive star. Massive stars die quickly, so SNII should enrich their surroundings very soon after the first stars are formed.
The s- and r- processes are thought to produce many of the same heavy elements, like samarium (Sm) and zirconium (Zr). But, to a good approximation, europium (Eu) is produced almost entirely by the r-process, while barium (Ba) is primarily produced by the s-process. So by studying the relative amounts of Eu and Ba in old Milky Way stars, the authors hope to determine when SNII (r-process) or AGBs (s-process) were most responsible for enriching the Milky Way.
Measuring Element Abundances
The authors used high-resolution spectroscopy of 593 stars, which includes measured absorption lines for a large number of heavy elements. The depth of each absorption line can be used to measure how much of each element is in the star’s atmosphere. Figure 1 shows an example line for Zirconium (Zr). The data (in black) can be fit by a variety of model spectra (blue lines) which have different abundances of Zr. By finding the best fit line (red), they infer the abundance of Zr.
Aside on notation: Abundances are commonly measured relative to other elements (like Hydrogen), and compared to the Sun. Since there is a large range of abundances in different stars, astronomers report abundances using logarithms. The authors use the following notation: [X] = log10(X) – log10(X_sun). In other words, [X] is the logarithm of the value X relative to the Sun. A star with [Fe/H] = -1 has 1/10th the iron abundance that the Sun has, and one with [Fe/H] = -2 has 1/100th that of the Sun.
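A minimal sketch of this notation in code (the abundance values here are invented purely for the example):

```python
import math

def bracket(x_star, x_sun):
    # [X] = log10(X) - log10(X_sun): the star's abundance of
    # element X relative to the Sun, in powers of ten.
    return math.log10(x_star) - math.log10(x_sun)

# A star with a tenth of the Sun's iron abundance:
print(round(bracket(0.1, 1.0), 3))   # -1.0
# And one with a hundredth:
print(round(bracket(0.01, 1.0), 3))  # -2.0
```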
What Produced the Heavy Elements?
The authors use the ratio [Eu/Ba] as a proxy for the relative contribution of r-process and s-process enrichment to a star’s chemistry. They compare this ratio to the iron abundance: [Fe/H]. Since iron is produced in all enrichment processes, [Fe/H] acts like an enrichment stopwatch: stars with low [Fe/H] were created early, before the Milky Way had been highly enriched. Those with higher [Fe/H] were formed later, when the Milky Way was already enriched.
Figure 2 shows the [Eu/Ba] ratio as a function of the metallicity ([Fe/H]) of each star. The “pure r-process” line shows where the [Eu/Ba] ratio would be if there was no s-process enrichment. The figure clearly shows that stars which formed earlier (with lower [Fe/H]) were primarily enriched by the r-process. They have higher [Eu/Ba] ratios, close to what would be predicted if the r-process were all that contributed. Younger stars (those with higher iron abundances) have lower values of [Eu/Ba], suggesting that the s-process became more significant later on.
The authors conclude that the r-process was responsible for enriching the Milky Way at early times, while the s-process kicked in later. This agrees with the expected picture of where the r- and s- processes are active. Type II supernovae (a possible source of r-process elements like Eu) will begin to enrich the Milky Way shortly after the first generations of stars are born. AGBs will only start to produce s-process elements after hundreds of millions of years, when the first generations of low-mass stars die off.
The true ages of these stars are difficult to measure with spectroscopy or photometry alone. If we had more precise ages for these stars, we could pinpoint the actual time when the r- and s- processes started enriching the Milky Way. By measuring more accurate ages of these stars with gyrochronology, future Galactic Archaeologists will soon be able to say much more about the history of our Milky Way, when the first stars were born, and when they first started enriching the galaxy with the ingredients of life.
For my first post on results from the Division for Planetary Sciences meeting, I'm going to tell you about Pluto's small moons: Styx, Nix, Kerberos, and Hydra, their bright colors and wacky rotation states.
In The Martian, NASA astronaut Mark Watney is stranded on Mars. At a critical moment, China offers to help the U.S. bring him back to Earth. But can these two countries cooperate to explore space in reality?
In his new book, “Unstoppable: Harnessing Science to Change the World,” Planetary Society CEO Bill Nye addresses a “New Greatest Generation” -- today’s young leaders who embrace science and optimism for a viable future.
November 10, 2015
Lunch. £3.50. The Pen Factory, 13 Hope St, Liverpool L1 9BQ. Phone: 0151 709 7887. Website.
Film If there was a spiritual venue for my cinematic awakening it was the Hyde Park Picture House in Leeds. From the moment I discovered its existence after finding a brochure for the Leeds film festival in freshers' week, it's pretty much where I spent a lot of evenings, especially in my second and third years when it was a five or ten minute walk from my accommodation. In the days before even dial-up internet was a domestic necessity, this was as close as I got to having Netflix, and of course entirely superior because although the selection was rather more limited and you couldn't pause for a toilet break, it was projected on a giant screen instead of the portable television I have at the moment.
But even in my first year in halls, my romance with the place was such that when it came time to make a documentary for a presentation skills module, I decided to put together a ten minute piece about the place which included an interview with the projectionist and the two montage sequences embedded above, which will give you a retrospective sense of what it was like back then. As you can see it was a rep cinema in the old style with wooden theatre seating, a screen sitting atop a stage, an orange clock with a backlight which was always on and an actual balcony. The gloss paint. All of the gloss paint.
There's plenty of advertising on show in that video, so you can see which filmic era this was. Flyers advertising The Scent of Green Papaya. A box office papered with publicity stills for Germinal. A late night showing of Reservoir Dogs which had to be granted a video licence. A poster for an all-nighter featuring Wayne's World 1 & 2 and the two Bill & Ted films. Age of Innocence and Romeo is Bleeding in general release. A trailer for An Innocent Woman. While I'm never a complete believer that cinema has golden ages, every year having its classics and its duds, you can't really argue that this wasn't a great time to be a cinema goer.
Although Dogs was a permanent weekly fixture for a while, in later years the menu had more variety, and it was in my third year that I saw Mean Streets on a Saturday night in a show which finished at about one am, from a print which looked like it had been knocking around since its original release. One of those films which feels like it couldn't live up to its initial viewing, I like that it now exists as a series of dreams, flashes of images: notably the famous scene of drunk Keitel with an Arriflex camera hung around his neck, De Niro's hat, the blueness and rain in the darkened streets, the pervading sense of dread and threat.
Here's a list of some of the films I remember seeing there: Trainspotting. Small Faces. Four Weddings. Leon. The Piano. True Romance. Withnail & I. Kids. Flirting With Disaster. Like Water For Chocolate. Farewell My Concubine. The Hudsucker Proxy. Pulp Fiction. Dazed and Confused. In The Bleak Midwinter. Manhattan Murder Mystery. Flower of my Secret. La Ceremonie. Mina Tannenbaum. Age of Innocence. The Last Seduction. Living in Oblivion. Mute Witness. Clerks. Smoke. Blue In The Face. The Brothers McMullen. Four Rooms. From Dusk Till Dawn.
The last film was John Sayles's Lone Star on the night of my graduation. My parents had gone to bed at the hotel and I ventured back to the Hyde Park after a few months away. For most people I expect it was the ceremony itself which drew a line under their undergraduate experience, but for me it was sitting on that empty balcony in that auditorium for what I knew to be the final time. I cried. A lot. Just as I always do when I know that a feeling or place which was once there is gone. Brilliantly, the venue still exists although market forces mean that there's less variety on offer to new students. But one day I shall go back. Yes, one day.
Lots of government services require their users to report when things in their life or an organisation change.
This places a lot of responsibility on the user - they need a good mental model of the service to know what to report, when they should do it and how. It also generates a need for lots of secondary transactions and services: update this, report that, change this, re-apply for that.
The 'digital assistant' approach to designing public services could start to make things simpler and reduce the number of 'Report an X to Y' style government transactions.
After all, if services understand your past circumstances, why can't they use those circumstances to ask you the right questions?
Here are 7 possible* patterns (there are probably many more).
1) Recurring change
Some circumstances need updating on a regular basis (things like monthly childcare costs). The 'recurring change' pattern notifies users (via push alerts or SMS) that they need to provide some information. The service should be smart enough to know the optimal number of days to ask this before any deadlines.
2) Future confirm
If a user reports a temporary change of state, for example that they are going on holiday or taking their car off the road, the service should be able to ask the user if that state has passed.
3) Date determined confirm
Similar to 'future confirm', there are some changes the service should be able to anticipate from information it already holds, for example if it knows the user has a child of a certain age.
4) Recurring confirm
A 'dead man's handle' style confirmation, so the user has to actively confirm: "does your cafe still have 12 tables on the pavement outside your business?".
5) Recurring ignore-to-confirm
As above, but inaction is taken as confirmation.
6) Random change
Ask a user to submit new information on a subject at random intervals to help keep their data up-to-date.
7) Cascading updates
Sometimes the service will be able to determine if a change in a particular circumstance is likely to have caused a change in a related circumstance. For example, registering for a particular license or moving premises might prompt the service to ask the user to confirm information relevant to a related tax.
* No, they have not been tested, but then you can't build what you can't think of in the first place.
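To make one of these patterns concrete, here is a minimal sketch of pattern 2, 'future confirm': the service records a reported temporary change and, once the reported end date has passed, generates a prompt asking whether that state has ended. Everything here (class names, method names, the prompt wording) is my own illustration, not any real government service's API:

```python
import datetime as dt
from dataclasses import dataclass, field

@dataclass
class TemporaryChange:
    description: str
    ends_on: dt.date      # when the user expects the state to pass

@dataclass
class Assistant:
    changes: list = field(default_factory=list)

    def report(self, description, ends_on):
        # The user reports a temporary change of state.
        self.changes.append(TemporaryChange(description, ends_on))

    def prompts_due(self, today):
        # Once the reported end date has passed, ask the user to confirm.
        return [
            f"Has this ended: {c.description}?"
            for c in self.changes
            if today >= c.ends_on
        ]

assistant = Assistant()
assistant.report("car off the road", ends_on=dt.date(2015, 12, 1))
print(assistant.prompts_due(dt.date(2015, 11, 10)))  # [] - nothing due yet
print(assistant.prompts_due(dt.date(2015, 12, 2)))   # one prompt due
```

A real service would trigger these prompts via the notification channels mentioned above (push alerts or SMS) rather than polling on demand.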
Six years ago I wrote three tips on making your board effective (tip #1, tip #2 and tip #3). Since then there is a lesson that I keep on relearning and so I am writing this tip #4 as much as a reminder to myself as anything: your board cannot be effective if it only talks to you, the CEO. Instead, you should make sure that members of your board also have relationships with execs on your senior team.
Now I have a feeling that quite a few VCs/board members and also CEOs will disagree with this. On a number of occasions I have heard this approach characterized as “going around the CEO” or as “undermining the CEO.” Let me be super clear here: if a CEO can be undermined simply by talking directly to someone on his or her senior team, then there is already a massive problem. In fact, I am now using this as an indicator the other way round. I try to stay away from backing founders who are not comfortable with me talking to some or all of the members of their senior team directly.
Why do I feel so strongly about this? Because you cannot help in a situation where you don’t know enough. Almost all the mistakes that I have made as a board member can be traced back to not having enough information about a critical issue. Just like doctors are likely to make the wrong diagnosis if presented with only some of the symptoms, or none at all, so board members cannot help guide to the right decision if they are partly or completely in the dark about what is actually going on in the company.
But how can you be in the dark if you are being presented with lots of information, even by team members directly in board meetings? Because it is super hard for people to say things in a board meeting that are even remotely controversial inside the executive team. These could be disagreements among senior team members about strategy, or implementation specifics, or even about the performance of the CEO. Information on these difficult topics only really comes out in one-on-one conversations with senior team members.
Now of course there is a way to do this wrong as a board member. You could in such a one-on-one meeting say things that would effectively undermine the CEO. This is no different than in any other “skip level” meeting, e.g. the CEO meeting with someone lower down in the organization. In such meetings one has to be a listener first and foremost and if asked for a comment one has to acknowledge what one has heard but abstain from making a judgment.
I will write a follow up post to this that talks more about how I have come to understand the ultimate role of a great board member: helping a founder understand his or her weaknesses and how those are reflected in the organization. There is little chance of that happening without getting feedback directly from senior team members.
On October 28th, the Cassini spacecraft flew through the geyser plume of Saturn's moon Enceladus. But Cassini was not the only spacecraft operating in the solar system that day.
November 09, 2015
Lunch. £3.50. Street Café, Everyman Theatre, 5-11 Hope Street, Liverpool, L1 9BH. Tel: 0151 708 3700. Website.
Occasionally hope triumphs over experience and I expose myself to another face-punch, like the one I got this morning when I learned that the triangular machine tool development had been rejected for the Innovation Vouchers Round 13.
This is the lowest, easiest grant money to go for — according to everyone. You don’t even see the £5000 when you win it because it’s only paid to fit and proper people such as patent attorneys and college professors. If you can’t win one of these then you’re such a loser you don’t need to waste everyone’s time applying for anything more substantial. And boy have I wasted people’s time (including my own) in the past over other rejected projects.
Innovate UK (formerly the Technology Strategy Board) has only one job, which is to distribute £400 million per year on the basis that:
“We work to:
* determine which science and technology developments will drive future economic growth
* meet UK innovators with great ideas in the fields we’re focused on
* fund the strongest opportunities
* connect innovators with the right partners they need to succeed
* help our innovators launch, build and grow successful businesses”
You’ll note that this list does not include or imply that:
Our ambition is to acquire the most beautifully crafted set of grant applications in the world to provide a lining to the filing cabinets in our office.
You’d think it’d be obvious that a written application merely needs to meet a certain standard of expressiveness to merit being judged entirely on its content. It either passes that standard or it doesn’t, and I won’t be told that I failed because I simply didn’t sweat enough over the 1000 characters allotted on the form.
Nevertheless, here’s the case I wrote down for obtaining an innovation voucher, which I intended to spend at the AMRC:
We have a CNC milling machine based on a triangular geometry that is capable of cutting steel and can be built from parts a tenth of the price of a conventional machine with the same capabilities. Several full scale prototypes have been built and have proven capable. Unfortunately neither of the inventors are trained mechanical engineers and we require expert advice on the design as well as an independent assessment of its potential.
Luckily, in these enlightened days, InnovateUK actually publishes where the grants are going, so I don’t have to file FOI requests as I used to. The 60 winners of the previous Round 12 have been disclosed.
Here’s one of them, which is presumably intending to drive future feline growth:
To Design and build a fully automated Cat feeder that can support up to eight cats for up to 4 days or the length of a bank holiday weekend.
The feeder will also make use of modern connectivity technologies so as not to break the link between owner and cats but also allow additional functional options such as play and monitoring.
The Modular design will allow for customization for owners offering greater flexibility and increasing the consumer market this product is aimed for.
New trendy Fashion line for everyone. Pick’n’Mix your t-shirt to make it suitable for you
Honesty Foods develops a liquid tea bag which contains sufficient protein and fibre to take the edge off your appetite and therefore help reduce snacking.
With the support of Innovate UK, CookieSmart is developing an exciting new Internet of Things (IoT) application for kids!
PitchForMe plans to build a platform which gives insight into the real person prior to meeting, from which employers can make informed hiring decisions. An employer would be able to fully understand a candidate’s qualities. Through a cloud based dashboard solution they will be able to find people who have aligned values and truly fit their company. Also included would be personality traits, video pitching, and work histories. Personality profiling would be available which determines strengths, how an employee works within a team, and areas for development.
I’m not making these up. This one is pretty timely:
The external expert will be able to help in defining the technological feasibility if(sic) developing a Human presence detector and linking it to an app that will alert the drive/logistics provider to the presence of human entity located within the trailer of the HGV.
This one is plainly the text of a job ad, and not an answer to a question posed in the application form. Yet somehow it got accepted:
We are looking for a talented and experienced hybrid mobile app developer (minimum 5 years experience) who has a passion for innovation, to join our team on a 3 month contract. You will be providing guidance, advice and technical leadership on a senior level to launch the re-build of our existing iOS app. Your superior technical skills will contribute to the future of the company and to the success of Must. The successful candidate will have extensive previous experience working with PhoneGap or Ionic. The successful candidate will also have experience in the build and deployment of a medium to large scale application capable of horizontal scaling. Candidates must already have the right to work in the UK without sponsorship.
Perhaps machine tools are just too passe and boring. Well, how about this:
An expert external CNC programmer will be able to create a programme for us to use on the machine. We are hoping to develop a programme that will enable us to create different extending tables of bespoke sizes just by inputting the dimensions for each table. This will enable us to create tables of an extremely high quality with precise dimensions.
Sure, it’s a great thing to do, but is it innovative?
If schools were places of learning, then this one should easily be within the capabilities of a school kid as part of an end of year project. I don’t see anyone putting such resources and needs together.
None of this makes me feel any better. In fact it makes me feel worse. Once again I’m a failure. Go suck on it.
Now that I have wallowed long enough in this particular punch in the face, you don’t need to know how much of this blogpost has already been deleted. It’s pretty lucky I don’t have any rent to pay except to the hackspace, or I’d be homeless at this rate.
Reporting from the 47th annual Division for Planetary Sciences Meeting, DPS15 by The Planetary Society
I'll be reporting all week from Washington, D.C. from the 47th annual meeting of the Division for Planetary Sciences of the American Astronomical Society. Expect lots of news from New Horizons, Dawn, Cassini, MAVEN, WISE, and Rosetta missions, not to mention ground-based telescopes, plus a variety of other sources.
November 07, 2015
TV Peter Fucking Capaldi. Peter Harness’s The Zygon Inversion seemed like an unlikely moment for the current incumbent to give what’s arguably his greatest piece of acting for the series, given the preceding episode’s action and adventure whizziness and the trailers and preview clips which suggested this would be a companion-orientated instalment. Yet here we all are, and I can tell it’s we all because I’ve looked at Twitter making its hyperbolic statements about how this might well be the best performance an actor in the central role has given and even that he’s the best Doctor ever.
Clearly I’m going to indicate that we shouldn’t be too hasty, that almost every actor who’s wandered through the TARDIS has been labelled as such at some point, and that not enough people have heard the McGann audios to really properly judge. But I will say that this is one of those moments we fans live for and crave, why we’ll sit through the rot which might otherwise be released under this title across the various media, why we’ll even watch whole seasons full of episodes which rub us up the wrong way. This. The synergy between an actor at the top of his game and a script feeding him poetry.
Typically, you’d spend a review working up towards talking about this sort of climax but let’s just bask for a moment. This near closing scene is ten minutes long, beginning just over half an hour into the episode so as predicted by many last week, as with most two-parters in the revival, after a zippy opening instalment, everything slows, the shot length increases and the implications of the given set up are played out, the writer’s thoughts and fingers stroking the textures of his idea. It’s the same pattern we saw in World War Three and The Forest of the Dead and Flesh and Stone.
After making some fairly direct comparisons with the current situation in the Middle East last week, this episode, and the Doctor’s speech in particular, fittingly given the weekend, broadens out to encompass a wider discussion of the implications of war. In its simplest terms, even in ideological circumstances, it’s tit for tat, and it requires the two sides to decide that actually they’d be better off not killing each other for a change, no matter the original apparent causes and whatnot. This isn’t mutually assured destruction. This is deciding not to have weapons in the first place because you don’t need to use them. Jeremy Corbyn would be pleased with the central message of the episode.
Hopelessly optimistic, to be sure, and it expects the more irrational side to lay down their arms, usually in conflicts where both parties believe the other to be the irrational one. Plus even if the two of you decide not to blow each other up, there’ll always tend to be an opportunistic third party who thinks you’re both mad and you’ll be exploded anyway. In this way, the Doctor’s essentially Fenchurch in the Hitchhiker’s Guide, “the girl sitting on her own in a small cafe in Rickmansworth (who) suddenly realized what it was that had been going wrong all this time, and she finally knew how the world could be made a good and happy place.” Then boom.
In other words, this philosophical Milgram experiment might work against a warmongering Zygon but probably won’t against empire-builders, which is presumably why the Doctor hasn’t ever attempted this tactic with the Daleks and decided Hitler was best left locked in a closet. But like I said, bless the writers, and notice it’s Harness and Steven Moffat who have their names on this script, for at least putting the words in the Doctor’s mouth and then showing us the potential results if they were enacted. If nothing else it might halt the escalation of the war of the buttons in schools or which channel to watch on a Saturday night.
Peter Fucking Capaldi. Part of the job of an actor is to show the results of a past which they themselves didn’t experience but in Doctor Who the challenge is somewhat greater because the audience has often watched another actor experience them. In here, we can see the faces of all three of the chaps who stood around the moment in the eyes of Capaldi and once again in this series, in a way which simply wasn’t the case last year, we can entirely believe that the man emotionally gesticulating here is the same one who didn’t press that particular big red button but then believed he did and then experienced the guilt of destroying his own race.
But when Capaldi’s handed the Bafta for the series, let’s not forget there are other people in this scene. Jemma Redgrave feels sidelined but really she’s held back in order to give the moment when Kate closes the box resonance and herald the closing beats of the piece. When she apologises, Redgrave says it in full knowledge of its import; her character’s father never apologised for anything or cared all that much about disappointing the Doctor. If the story has a legacy for her character, it’s in once more delineating her from her heritage, to show that she’s not simply fulfilling a narrative function carried over from earlier series.
Mainly it’s Jenna Coleman who’s being ignored a bit when she has the difficult job of not only presenting someone who’s a subtly evil version of the character she already plays but also in a way which doesn’t gesture towards panto (in a way available to Annette Badland in Boomtown) but which still hints at the alien figure underneath. Next time you watch, notice how the realisation of what Bonnie is capable of and then what she’s actually capable of wash over her face as elements of the real Clara’s personality become more expressively recognisable at just the moment when the Doctor indicates that’s how he knows he’s getting through.
Some on the aforementioned social network have questioned why, despite all of this, the Doctor is perfectly fine for twenty million Zygons to be living secretly on Earth and for the government to be covering up the fact. But isn’t this his plan for the Silurians from Cold Blood in action? Hasn’t his approach always been that he’s quite happy for aliens and Terrans to co-exist peacefully, only really forming the oncoming storm when one side, usually the former, decides they want the place for themselves? Back in The Unquiet Dead, before he discovered their true nature, he was quite happy for the Gelth to inhabit the deceased even against Rose’s objections.
Much of the rest of the episode is about getting the Doctor in that room in order to give this speech, although this pan-national redo of The Android Invasion with Zygons offered some delights along the way, notably as the Time Lord and Osgood fled the remains of the crash, even if neither of them stop to wonder about the fate of the other people on the plane (presumably the Doctor was able to save her because she was the only other person in proximity). The sinister police officers continue the Pertweean reverberations from last week, though it’ll presumably be the downing of the plane itself being shown right now which’ll attract the ire of the BBC’s foes, even though there’s not a lot that could have been done to edit the piece a la Robot of Sherwood.
Osgood’s alive! Of course she is. For all that happens in the McGann audios in relation to Zygons keeping their shape after their original dies, it was never entirely convincing that the human version died at Missy’s hand. But it’s good that even when the Doctor isn’t there, the show won’t confirm or deny which is the “real” one. As far as they’re concerned they’re both real. Expect a jaw dropping moment in the future when the Bonnie version pops back into her Zygon shape signalling that the human version has gone. Perhaps it’ll be Missy who does the deed, wanting to finish the job she thought she’d already finished.
It's also important to notice that this is only the character's third proper television appearance and yet she feels just as much a part of the series as the Paternoster Gang or River or Bernice or any of the Doctor's other friends. Ingrid Oliver's performance has also changed across the three episodes from somewhat giddy, but strong, fan girl in The Day of the Doctor to the slightly reserved expert who appears here. On a textual level, I've no idea what the TARDIS business was about but it feels like it's supposed to have significance in the way that random bits of dialogue often do. Perhaps it's just a throwaway, perhaps it has greater import. Speaking of which ...
What of Clara? Four episodes to go and she’s still around. What did the Doctor mean when he said it was the longest month of his life? As I pondered the other week, has the Doctor already seen her death somehow, in The Magician’s Apprentice even, and the version he’s currently travelling with is from earlier in her timeline? At some point will he drop her off, much as he did with River Song, knowing full well it’ll be for the last time as the younger version of himself, the one he already remembers, takes her into his TARDIS and to her certain death? Will there be other pieces of dialogue from earlier in the season with some extraordinary double meaning?
Either way, Harness and Moffat have almost managed to make up for the whole Kill The Moon debacle, the latter giving every indication that he saw how he’d nudged the character in an unpleasant direction and remembered what it is we like about him. More and more, season eight feels like an aberration, a storytelling experiment gone wrong on a similar scale to the Divergent Universe arc in the McGann audios and the whole of the Sixth Doctor television tenure, which is something I think I may have said before but in the wake of The Zygon Inversion feels like it needs repeating. If only more people were still watching it on broadcast.
Politics After she retweeted something which she's since deleted, a contextless tweet that said, "Not quite how I'd put it, but some interesting facts", I retorted:
This is how I view most things. https://t.co/wJc0F9rRTT — Stuart Ian Burns (@feelinglistless), November 7, 2015
November is shaping up to be a busy month for Orion and Space Launch System hardware. A human-rated flight engine is in the test stand at Stennis Space Center, and a version of Orion's service module is getting ready to cross the Atlantic.
November 06, 2015
I'm going to ignore bacterial and protozoal parasites for now, because my Toxoplasma gondii master tells me they're of no interest. And we all know about the mundane visible-with-the-naked-eye ones like tapeworms and pinworms, and the horrifyingly nasty Guinea worm that causes Dracunculiasis. But these are relatively straightforward parasites. Booooring.
However, parasites in general are fascinating and some of their variants are just totally bugfuck. Literally. Take Ophiocordyceps unilateralis, for example. It's a fungal parasite. Its spores hatch and brain-control its host—forest floor dwelling ants—making them climb high up into the vegetation canopy; the fungus then digests its host's body and forms fruiting bodies to scatter new spores over the forest floor for more unfortunate ants to stumble across.
Or take the strange case of the hyperparasitoid wasps—wasps that lay their eggs in wasps that parasitize caterpillars: small cabbage white butterfly caterpillars are used as a host species by the parasitoid wasps—Cotesia rubecula and Cotesia glomerata—which in turn make a handy meal for the even smaller hyperparasitoid wasp Lysibia nana. (Thereby confirming something I've known since I was two years old: wasps are assholes.)
And then things get weird, because these are instances of parasitism where the parasitic life cycle is proceeding according to the script. When parasites go wrong, things get weird-ugly, really fast. Tapeworms sometimes run amok and end up encysting in the brains of their hosts. And then there was the recent case where an HIV-positive man contracted and died of cancer from a non-human source, namely an invasive tapeworm's own neoplasm (which became highly invasive in the immunocompromised host).
Anyway, I thought I should share the joy with you because I'm currently inventing semi-plausible parasitic and meta-parasitic lifecycles with humans as the basic host, for a future Laundry Files story (because Equoids are really just a little passé). And I thought you might like to share some of your favourite parasites and hyperparasites with me! (Nothing well-known or mundane, of course. Assume I'm already familiar with the common stuff.)
Anyone want to start?
- profiling and the development of an ecosystem around this.
- personal reputation and the development of a relatively novel system that combines components of both trust and profile.
You also can't just say, well, that company should be asking "how does this process help us?", as the executives don't understand the inner workings of systems and the people involved in the process believe it's the right way. They're proud of their environment and have all the usual symbols of inertia. It's not that they're daft but instead they believe that what they're doing is right - it's what they've always done and they've invested considerable personal time and effort making it better. Of course, making it more efficient would help the company, hence investing in robotic automation sounds "sensible". I can easily build a case for this, as daft as it sounds (and it is daft).
Showing them a map, even one that just points out the change from using more commodity components such as standard racks (see figure 3) rather than focusing on making an existing process efficient, exposes the assumptions and workings of a company.
Once you've explained that, it becomes far easier for people to make the leap towards not having racks at all (i.e. flow 2). I mention this because people often tell me about the importance of removing bottlenecks and ensuring efficient flow. I happen to think such techniques for value stream analysis are reasonably important but you have to be really careful that you're not making the wrong thing efficient. It's not enough to just look at flow (as in figure 1), you need to consider how evolved the components are (e.g. fig 2 and 3).
Which is why I always recommend you start with a map.
I feel a bit ridiculous posting this now after it’s been sitting in draft form for months, considering one of the main points was about how rapidly things can move around the digital and physical forms, but I was prompted by this post by SOSO and Print All Over Me to dig it back out.
I’ve previously written about the “Deep Dream” T-Shirt in The Google AI Neural Network T-Shirt here on Medium, where it took just 4 days from reading about Google’s Deep Dream to installing the software, running it and having the T-shirt printed by SubLab in my hands. I commented there that: “If SubLab had Google’s Neural Network and a shop-front you could probably walk in, say “bee hives” or “honeybadger” and walk out with a one-off custom computer AI generated t-shirt within a couple of hours. This probably already happens somewhere.”
The “probably happening somewhere” part is now true(er), although the turnaround is a less instant 2–3 weeks. With PixelWeaver by SOSO you can indeed type in “bee hives”, “honeybadger” or as shown here “sunrise” and get yourself a custom item of clothing.
You can also use the playfully interactive app written by artist LIA on the Print All Over Me site to create unique clothing generated from her code and your mouse movements. With the promise of being able to upload your own code for other people to play with coming soon. Think codepen.io combined with fashion.
I have some thoughts about this, which is why we’re here, but first back to the GeoCities T-Shirt.
The T-Shirt is based on the wonderful http://www.cameronsworld.net/ website built with artefacts excavated from archived GeoCities pages. In Cameron’s own words…
“…a tribute to the lost days of unrefined self-expression on the Internet. This project recalls the visual aesthetics from an era when it was expected that personal spaces would always be under construction.”
This time the ever excellent SubLab (the only place I know of based in the UK cool enough to handle all over prints) had the T-Shirt back in my hands in 3 days from me first setting eyes on Cameron’s website. Day 1 was snagging Cameron’s website then fiddling around with the design, chopping chunks out & moving elements about to make it more T-Shirt shaped before sending it off. Day 2 was waiting while SubLab printed it; Day 3 was it arriving back in the post.
This is the part where I try to tie together the nature of the early web as exemplified by Cameron’s GeoCities art project with rapid custom textile printing, and why I wanted to put the former onto the latter.
Cameron sums it up nicely with “the lost days of unrefined self-expression on the Internet”; with GeoCities & MySpace and FrontPage (for those who were more advanced), people enjoyed building their own spaces on the internet without a design care in the world. Some of this sentiment is captured in Anil Dash’s excellent essay “The Web We Lost” from 2012…
“In the early days of the social web, there was a broad expectation that regular people might own their own identities by having their own websites, instead of being dependent on a few big sites to host their online identity. In this vision, you would own your own domain name and have complete control over its contents, rather than having a handle tacked on to the end of a huge company’s site.”
(replace “websites” and similar with clothes & style).
Over time things have gotten a little more refined aesthetically through the medium of hosting your own words on, well, Medium, where beautiful identical design gives equal weight to some idiot’s thoughts about what the Ruby community should do with Codes of Conduct vs a well thought out missive on the importance of putting an animated Christmas gif onto a t-shirt.
Or identikit tumblrs, PinBoard, or freedom from Facebook newsletters where you actually reach your readers.
In short the crazy wild web of the past is now all pleasing fonts in soothing shades of not quite black [rgba(0,0,0,0.8)]. When people were first able to create their own websites armed with nothing more than Notepad or a textarea the results were often bold, brash, colourful and noisy, an attempt to be unique and individual…
“I Made This, this is me.”
…is what they psychedelically screamed.
This is my hunch, this is my feels from watching custom print sites over the years, from Zazzle and CafePress to Print All Over Me, knyttan/unmade and SubLab: we are at the early MySpace & GeoCities point in customisable mass consumer fashion. When you can make an item of clothing completely your own, you want it to be noticed.
“I Made This, this is me.”
…in a callback to earlier, everybody’s (in a “Here Comes Everybody” sense) personal fashion spaces will always be under construction with wonderful unrefined self-expression.
At least for a few more years until things settle down, a calmer aesthetic takes over and people will define their tastes based on either current well known brands who “go digital” or whoever the future fashion equivalent of PinBoard, tumblr, Facebook and Medium are.
In the meantime I shall wait for clothes to get animated gifs and MIDI. And it will be glorious.
A summary of Jupiter's changing face as seen from Earth during its 2014/2015 apparition.
November 05, 2015
TV The BBC Store has finally launched and there's plenty to visually engorge on if you have the funds. I'm personally inclined to wait until it's available on the TV version of the iPlayer app unless something really remarkable crops up that I'm desperate to see.
One of the criticisms, as has always been the case even when Doctor Who was being released on VHS, is why people should have to pay to see something they've already paid for through the license fee. The rule is that what we're actually paying for is the initial broadcast of the programme. Any ancillary version sometimes requires further payments to participants either when it's repeated or in future merchandising or cross-media releases.
In his Guardian article about the Store, Mark Lawson offers an alternative and equally valid philosophical explanation:
"To me, this argument seems akin to a home-owner refusing to pay the water rates on the basis that the liquid has almost certainly been through the house before. Can contributing a fraction of your licence fee to the cost of a programme at the time of its first transmission really be thought to have bought a right to view the show in perpetuity? And, even if that case could be made, what about products in which we were never personally a shareholder? My first purchases from Store – Potter’s Double Dare (1971) and an episode of the sitcom The Likely Lads (1963) – come from a time when I didn’t pay a licence fee. Can I really contend that, when my dad handed over the cash at the post office for his licence fee, he was securing unlimited viewing rights for his heirs?"

It never crossed my mind that I wouldn't have to pay to see something again. Anything repeated on television is essentially a freebie.
If you have looked for your car recently in a big parking lot and had trouble finding it, you can be forgiven. All modern cars look alike. In fact, if you photoshop the grille of one maker onto the car of another, it’s nearly impossible to tell the difference, as this work by Jalopnik demonstrates.
Yup. That’s a Kia with a BMW grille.
So why do I bring this up? Because yesterday Twitter announced that it was replacing the star icon with a heart for favoriting. This too seems to me driven by a desire to please a larger number of end users. It is the same approach to design based on user feedback that has converged the modern cars to all look alike. So now we have this situation:
THIS IS NOT TUMBLR
And yes, that’s the bottom of a tweet, not a Tumblr post.
But isn’t it good to give end users the same conventions everywhere? I am sure Twitter did a lot of testing which suggests that’s correct. And yet, just as with car design, it winds up being a net loss for society by removing variance from the environment. Both the car designs and Twitter’s choice are great examples of Hotelling’s Law, which the prolific economist Harold Hotelling described in 1929: in many markets it is rational for competing producers to make their products more similar to each other even though this results in a net loss to society.
For instance, I strongly preferred the star over the heart. I used it to mark tweets that I deem interesting and important. Many tweets that I deem interesting I don’t like and certainly don’t love. Quite often they are in fact tweets I disagree with but that are nonetheless important. The star is a value neutral highlight. The heart isn’t.
I will continue to use Twitter as will most other longtime users. On the margin Twitter will likely gain some new users from this move. And that’s exactly why Hotelling’s Law applies in so many situations. Still sucks.
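Hotelling's result is easy to reproduce numerically. Here is a minimal sketch (my own toy model, not anything from Twitter or Hotelling's paper): two sellers on a street of uniformly distributed customers take turns relocating to whichever spot wins the most customers, and however far apart they start, they end up side by side in the middle, which is the product convergence described above.

```python
# Toy model of Hotelling's "linear city": customers are spread uniformly
# on [0, 1] and each buys from the nearer of two sellers. The function
# and variable names here are my own.

def share(a: float, b: float) -> float:
    """Fraction of customers captured by the seller at `a` against a rival at `b`."""
    if a == b:
        return 0.5
    midpoint = (a + b) / 2  # customers left of the midpoint go to the left seller
    return midpoint if a < b else 1 - midpoint

def best_response(b: float, step: float = 0.01) -> float:
    """Best location on a grid of candidate positions, given the rival sits at `b`."""
    candidates = [i * step for i in range(int(1 / step) + 1)]
    return max(candidates, key=lambda a: share(a, b))

a, b = 0.1, 0.9           # start far apart, i.e. differentiated products
for _ in range(50):        # alternate best responses until positions settle
    a = best_response(b)
    b = best_response(a)

print(round(a, 2), round(b, 2))  # both sellers finish near the middle of the street
```

Each relocation is individually rational, yet the endpoint leaves customers at the edges of the street worse off than if the sellers had stayed apart, which is exactly the "net loss to society" in Hotelling's formulation.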
Authors: Jonathan C. Tan, Sourav Chatterjee, Xiao Hu, Zhaohuan Zhu, Subhanjoy Mohanty
First author’s institution: Depts. of Astronomy & Physics, University of Florida, Gainesville, FL 32611, USA
Status: to appear in proceedings of XXIXth IAU GA Focus Meeting 1
If you had asked an astronomer twenty years ago what Kepler stands for, the answer would probably have sounded similar to: “What! Have you never heard about Johannes Kepler?” Nowadays, chances are quite high that the answer would be: “What! Have you never heard about the Kepler mission?”
The Kepler mission has become so important that it can compete in reputation with the founder of the laws of planetary motion! This astrobite has to do with both Keplers: it considers planetary motions as well as the findings from the Kepler satellite. (If you want to know more about the Kepler mission, you can find way more astrobites articles than I can list here; just search for “Kepler” on our site and your curiosity will be satisfied.) Kepler data indicates that planets of 1 to 10 Earth radii with orbital periods of less than 100 days are very common in the Milky Way. (For a nice back-of-the-envelope estimate, please watch this talk by Eugene Chiang from minute 9:30 to minute 12.) There are different names used for this size range of planets, such as super-Earths, mini-Neptunes and gas dwarfs — an antonym to gas giants. I prefer the term gas dwarfs because it implies the main difference between the two: gas dwarfs have small gas atmospheres, while gas giants have large gas atmospheres. Classical models of planet formation were designed to explain one particular system, namely the solar system. However, the solar system appears to be unusual: it lacks a gas dwarf, and our innermost planet, Mercury, has an 88-day orbital period similar to that of gas dwarfs but is significantly less massive. As of this writing, there are two main scenarios to explain the observed exoplanetary systems: 1) formation of planets at larger orbits and migration to smaller orbits later in their evolution, and 2) formation of planets at small distances, so-called in situ formation.
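As a quick aside, the 100-day period cutoff can be turned into a distance using Kepler's (the astronomer's) third law. A small sketch for a star of one solar mass, where the function name is my own:

```python
# Kepler's third law for a 1-solar-mass star, in convenient units:
# (a / 1 AU)^3 = (P / 1 yr)^2, so a = (P in years)^(2/3) AU.

def semi_major_axis_au(period_days: float) -> float:
    """Orbital distance in AU for a given period (days) around a 1-solar-mass star."""
    period_years = period_days / 365.25
    return period_years ** (2 / 3)

# Mercury's 88-day period corresponds to roughly 0.39 AU, and the 100-day
# cutoff for the common Kepler planets to roughly 0.42 AU, so these gas
# dwarfs orbit at Mercury-like distances or closer.
print(semi_major_axis_au(88))
print(semi_major_axis_au(100))
```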
Inside-out planet formation briefly introduced
The authors of today’s paper are fans of the latter explanation, and they extend the idea by predicting that planets form inside-out. To better understand their idea, you need to know that planets form in disks of gas around the star, so-called protoplanetary disks (and look at the figure above, which is figure 1 in the paper). These disks form as a consequence of the infall of gas towards the young star and angular momentum conservation. First, the gas in the disk moves radially inwards towards the star. (Since this region is supposed to be free of turbulence, astronomers refer to it as the dead zone.) The closer the gas gets to the star, the higher the temperatures are. As a consequence, the gas heats up and ionizes at a distance close to the star. As you might remember from electrodynamics, moving charges induce a magnetic field. The authors propose that such a magnetic field can give rise to an instability, the so-called magnetorotational instability (MRI), which can cause a local increase in pressure — a pressure bump. Okay, so far so good, but now comes the clincher. The disk does not only consist of gas, but also of dust grains, so-called pebbles. So what do these pebbles do? Well, the gas slows down the pebbles (think aerodynamic drag) so they drift inwards, but once the pebbles reach the orbit of enhanced pressure, they cannot drift any further. The pebbles accumulate there and eventually form a planet that also sweeps up the available gas at this orbit.
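The pebble-trapping step can be illustrated with a toy model (entirely my own construction, not the paper's disk model): give the disk gas a smooth pressure profile that rises inwards, add a local pressure bump, and let the pebble drift velocity be proportional to the pressure gradient, since drag pushes pebbles towards higher gas pressure. Pebbles started on either side then collect at the bump:

```python
import math

def pressure(r: float) -> float:
    """Toy gas pressure: a power-law decline with radius plus a local bump at r = 1."""
    return r ** -2 + 2.0 * math.exp(-((r - 1.0) ** 2) / 0.01)

def drift_velocity(r: float, k: float = 0.05, dr: float = 1e-4) -> float:
    """Pebble radial velocity, taken proportional to the local pressure gradient."""
    dp_dr = (pressure(r + dr) - pressure(r - dr)) / (2 * dr)
    return k * dp_dr

# Drop pebbles at different starting radii and let them drift:
# each one stalls at the pressure maximum near r = 1 instead of
# falling all the way in to the star.
for r0 in (0.9, 1.5, 2.5):
    r = r0
    for _ in range(20000):
        r += drift_velocity(r) * 0.01  # simple Euler step
    print(f"pebble from r={r0} settles near r={r:.2f}")
```

The exact pressure profile, drag coefficient and units here are arbitrary; the point is only that drift stalls wherever the pressure gradient vanishes, which is why a bump acts as a pebble trap.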
Great! We have one planet, potentially a gas dwarf, but how can further planets form? A planet’s orbit is usually pretty empty, since the planet accreted the gas and dust surrounding it, and you might assume that material falls into this empty region. However, this is not the case. Less light is absorbed in the empty planet-induced gap, which allows light from the star to push the gas and dust back. New pebbles accumulate at the distance where the light of the star prevents them from falling in. In this way, another planet can form outside of the inner one, by accumulating dust and sweeping up gas. Since this planet causes another gap, yet another planet can form further out, and generally several planets can form in this sequence as long as enough gas and pebbles are available.
Not the final answer yet
To summarize, the planets form in a sequence — the first planet forms close to the star and each subsequent planet forms further out than the previous one. Hence the name: inside-out planet formation. Now you may think: “Wow! Does that mean that the riddle of planet formation is finally solved and researchers working in planet formation have to look for other jobs?” The short answer is: “No.” Clearly, the general idea of the authors is very nice and appealing, but the model is also based on a simplified version of the protoplanetary disk. For instance, it is not clear
- whether magnetic forces really can induce such pressure bumps,
- whether and how the dust grains can grow to planet size,
- whether the light from the star really is efficient enough to prevent material from falling in.
In fact, the first point is not necessarily required, since other physical processes may be able to build up a pressure bump, too. Altogether, the authors’ inside-out model is a nice alternative to classical planet formation scenarios assuming migration. It will be exciting to see which model requires less fine-tuning.
There have been several important pieces of news about European missions in the last month: Rosetta's fate has been determined; ExoMars Trace Gas Orbiter's launch is slightly delayed; and they have selected a landing site for the ExoMars rover.
An analysis by The Planetary Society shows that in the post-space shuttle era, NASA astronauts spend roughly 33 percent less time aboard the International Space Station than their Russian counterparts.
Mars Exploration Rovers Update: Opportunity Hits Winter Slopes at Marathon Valley by The Planetary Society
Opportunity hit the slopes of her seventh winter haven on the south side of Marathon Valley in October as the mission entered the 130th month of what was initially slated to be a 90-day tour.
November 04, 2015
Medicine Araveeti Ramayogaiah was an Indian paediatrician who, years before the internet, was spreading the word about preventative medicine, especially in relation to disease prevention, to poor areas by sending out postcards to the vulnerable and sick who couldn't afford private health care:
For almost 25 years, Ramayogaiah wrote and sent postcards to India’s poor, especially women, telling them about ways to prevent—rather than cure—diseases. The good doctor died in Hyderabad in September this year at the age of 65.
The inexpensive postcard was Ramayogaiah’s solution to private hospitals, which are typically inaccessible and unaffordable for many of India’s poor.
In all, he wrote around 36,000 postcards to patients, acquaintances and strangers—explaining basic habits like boiling water and washing hands, and how to prevent commonplace ailments like diarrhea.
This is by way of apologizing for the light blogging lately: I've been somewhat busy, because ...
Regular readers might have noticed occasional references to an ongoing project of mine which first got started under the working monicker "Merchant Princes: The Next Generation". It's a trilogy, which is to say a lump of prose about the size of a typical Neal Stephenson novel, or maybe "The Lord of the Rings", and rather than risk writing myself into a corner by letting book one escape into print before I'd lined up all my ducks for the ending of book three, I argued for writing the whole damn thing before first publication. This has good consequences and bad consequences. The good: not writing myself into a corner with giant plot holes locked in place by publication. The bad: everything takes much longer than expected—much longer.
Book one of the trilogy now known as "Empire Games", titled "Dark State", is provisionally scheduled to show up some time in Q1/2017. It will be followed by "Black Rain" and "Invisible Sun". It's set circa 2020 in the divergent future of the Merchant Princes universe, in the security state that evolved after the US president was assassinated in the White House with a stolen nuke in 2003 by narcoterrorists from a parallel universe. With the DHS tasked with protecting the USA from threats from all over time lines, America in this version of 2020 isn't a terribly happy (or liberal) place. And then they make contact with another time line—one that has developed its own nuclear weapons and paratime infrastructure, and sees words like "democracy" and "imperialism" in a different light ...
I've spent a chunk of the past two months head-down, redrafting and tidying up the second book, "Black Rain". But that on its own wouldn't be enough to keep me from blogging.
I've also spent a chunk of the past two months head-down, redrafting and tidying up the seventh Laundry Files novel, "The Nightmare Stacks". This is now in the production pipeline, and is due for publication by Ace on June 28th, 2016 in the US, and by Orbit on June 23rd, 2016 in the UK. You can pre-order it here in the UK/EU and here in the USA. (Note: there will be a UK ebook link in due course but it's not up yet. The US link goes to the Kindle edition; you can find the hardcover one mouse-click away.)
As the American cover copy explains:
After stumbling upon the algorithm that turned him and his fellow merchant bankers into vampires, Alex Schwartz was drafted by The Laundry, Britain's secret counter-occult agency that's humanity's first line of defense against the forces of darkness. Dependent on his new employers for his continued existence--as Alex has no stomach for predatory bloodsucking--he has little choice but to accept his new role as an operative-in-training.
Dispatched to Leeds, Alex's first assignment is to help assess the costs of renovating a 1950s Cold War bunker into The Laundry's new headquarters. Unfortunately, Leeds is Alex's hometown, and the thought of breaking the news to his parents that he's left banking for civil service, while hiding his undead condition, is causing more anxiety than learning how to live as a vampire secret agent preparing to confront multiple apocalypses.
Alex's only saving grace is Cassie Brewer, a drama student appearing in the local Goth Festival who is inexplicably attracted to him despite his awkward personality and massive amounts of sunblock.
But Cassie has secrets of her own--secrets that make Alex's night life behaviors seem positively normal...
As you probably figured out, this isn't a Bob novel (or a Mo novel), it's a Laundry novel. However I think it's just as much fun as the others; and we'll be going back to Bob's snarky viewpoint in book eight, "The Delirium Brief", which I'm planning for 2017.
By the time "Invisible Sun" and "The Delirium Brief" are in production, I will have written seven in-series books in a row. Moreover, the books immediately before (or interleaved with) these were also sequels. In fact, I haven't begun a totally clean-sheet novel length project since 2007, and my Muse is going a bit stir-crazy. Upshot: the next book will probably be something utterly different and, hopefully, fresh. But I'm not planning on abandoning either ongoing series—I just need to write something different once in a while.
Oh, and hopefully I'll have a little more time for blogging from now until the end of the year.
Title: Four hot DOGs in the microwave
Author: S. Frey, Z. Paragi, K.É. Gabányi, T. An
First author’s institution: FOMI Satellite Geodetic Observatory, Budapest, Hungary
Not to be confused with their edible, terrestrial counterparts, hot DOGs (hot dust obscured galaxies) are hyperluminous galaxies first detected by the WISE mission in the infrared. Most of these are located at a redshift of z~2-3, which also corresponds to the peak era of star formation in the Universe. These galaxies are heavily obscured with dust and gas, and are thus difficult to observe in visible wavelengths. However, they do emit detectable amounts of radio and microwave emission. Multiwavelength data suggest that hot DOGs contain active galactic nuclei (AGN), and are believed to be a brief phase in the evolution of galaxies transitioning from starburst to AGN dominated phases.
In this paper, the authors observe four different hot DOGs (which have been previously identified by the WISE survey) using very long baseline interferometry (VLBI) on the European VLBI network. In this kind of interferometry, a radio source is observed from at least two widely separated locations on Earth, and the resultant signals are combined with the time delay information of the observations. This produces an image that is equivalent to an image collected by a telescope the size of the maximum separation of the detectors. Interferometry is especially important for radio astronomy, as the long wavelengths of radio signals require a very long baseline to achieve a reasonable resolution due to the effects of diffraction.
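As a rough illustration of why the baseline matters, the diffraction limit θ ≈ λ/D can be evaluated directly. The wavelength and baseline figures below are typical illustrative values for an 18 cm VLBI observation, not numbers taken from the paper.

```python
# Diffraction-limited angular resolution: theta ~ lambda / D (radians).
# All numbers here are illustrative, not from the paper.
import math

def resolution_mas(wavelength_m, baseline_m):
    """Approximate angular resolution in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600 * 1000  # rad -> mas

wavelength = 0.18      # 18 cm, a typical VLBI observing band (~1.7 GHz)
single_dish = 100.0    # a 100 m single dish
vlbi_baseline = 8.0e6  # ~8000 km continental baseline

print(f"100 m dish:       {resolution_mas(wavelength, single_dish):,.0f} mas")
print(f"8000 km baseline: {resolution_mas(wavelength, vlbi_baseline):,.2f} mas")
```

The same wavelength that gives a 100 m dish a resolution of several arcminutes gives a continental baseline a resolution of a few milliarcseconds, which is why VLBI can resolve compact AGN cores.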
The radio emission from these hot DOGs is not significantly affected by dust obscuration. Given that this relatively weak radio emission is detectable from such high redshifts, the most likely candidate for the central engine driving the emission is an AGN. Fig. 1 shows the radio contours of these objects.
In two of the observed hot DOGs, the measured radio flux density is much smaller than the total flux density (previously measured by the FIRST survey). This suggests that only a part of the total flux is detected by the observable field of view of the VLBI and that the object is extended on larger angular scales. This discrepancy in flux density is presumed to be due to star formation and AGN activity that is occurring on angular scales larger than what the VLBI can probe. The authors infer a star formation rate from these measurements that reaches up to thousands of solar masses per year. In comparison, our Milky Way is believed to form about 1 solar mass per year. These observations are consistent with the idea that hot DOGs host both star formation activity and AGN activity.
Hot DOGs are important clues in understanding the onset of star formation and AGN accretion activity at high redshifts, as these two phases are believed to be initiated when gas-rich galaxies collide and merge. A galaxy merger provides an impetus for star formation to peak as gas is compressed to higher densities, and the abundance of gas also feeds the central black hole engine. To better understand the transition between the starburst-dominated and AGN-dominated phases of these objects, we need higher resolution radio data from future observations to resolve the spatial structures of these hot DOGs.
November 03, 2015
Film Here we are, the year of my birth. Any film beyond here I have to have seen retrospectively, though that's of course true of everything from Pete's Dragon onwards. Apparently I've seen more films than most but I expect there'll still be some moments when I'll be trying to make a value choice between some storming classics, narrowed slightly by never having seen some of those classics yet. I've only seen half of The Godfather Part II for example, thanks to having worked through half of the tv mini-series created by Francis Ford Coppola in which he re-edited all of the footage from the trilogy into chronological order. Having not seen any of it before, the chronological messing about became too jarring so I put it to one side assuming I'd be back after seeing the films in their original version and still haven't gone back yet.
But there's potentially an irrational feeling of loss when watching non-contemporary films, a synthetic quality perhaps, because time has stripped them of their contemporary context. Watching a film like Chinatown years after production, especially if you didn't personally experience the moment in which it was made, means that you can't really understand what it would have been like to have seen the film as it was released. Would it have made a difference? In ten years when people catch Me and Earl and the Dying Girl, will they even remember that there was a European refugee crisis? Are the two even related? Nixon resigned on the day Chinatown was released in the UK. Was that really at the back of people's minds as they attended the cinema that weekend and did it really change how they viewed the film?
Told you it was irrational and I don't think it works that way. If there's an overall thesis so far in this series, it's that the wider world very rarely intrudes on the experience of watching films and that it's usually who we are as people, and that context, which affects our viewing of the film. The BBC Genome reminds me that my first viewing of Chinatown was a timer recorded VHS of the 2nd April 1994 broadcast which was the first in Cinemascope (still something of a novelty for television twenty years ago). It was an Easter Weekend so I would have been home from undergraduate uni so in a position to set the video. Within a few days the Rwandan genocide began. Kurt Cobain died and co-incidentally so did Nixon by the end of the month. Only Nixon could go to Chinatown.
Perhaps I should be more cynical when watching film documentaries, notably dvd extras self-publicising the film they're supporting, which talk about how they "came out of the time" and were part of some great vanguard or zeitgeist. But it's easy to get caught up in photo and archive film montages of, in the 70s, flares and disco and Vietnam. Some films clearly "came out of the time" but did Chinatown and to what extent did contemporary audiences notice? Robert Towne based his screenplay on an incident which happened in the early 1900s. There's also a thematic cynicism in relation to authority but does this mean Nixon? Probably not. Glancing through many interviews with Towne he barely talks about the film in those terms.
In other words, films are best taken on their own terms unless the historical context is obvious and I should get over myself. I've been lucky enough to have lived through a third of the time that cinema has existed and only seen a limited number of films in context anyway. In any case, we watch every film retrospectively to some degree. Studios choose release days based on a range of factors and some films (Kenneth Lonergan's Margaret) don't find themselves in front of an audience for years. Even "live" television broadcasts have delays in fractions of seconds. If anything, watching a film retrospectively teaches us a lot about the time in which it was produced, even a period piece like Chinatown. But let me save that discussion for another time. Eighty odd films to go until the dawn of cinema.
Title: Compositional evolution during rocky protoplanet accretion
Author: Philip J. Carter, Zoë M. Leinhardt, Tim Elliott, Michael J. Walter, Sarah T. Stewart
First author’s institution: School of Physics, University of Bristol, UK.
Status: Accepted for publication in Astrophysical Journal.
The Earth is weird, and Mercury and Venus too!
The Solar System’s terrestrial planets are really not how we expect they should be. Why? Because we think in the very early stages of their formation they were assembled by the accumulation of planetesimals – myriads of small rocky bodies which formed from the dust in the protoplanetary disk, a gas-dust mixture of interstellar material, which was rotating around the young Sun in the first few million years of its lifetime. The drawback of this idea is that some elemental abundances in Earth, for example the ratio of iron to magnesium (Fe/Mg), differ from those of chondritic meteorites – the ones we think are the original matter Earth formed from! That's strange: usually you would assume that if you pile up a bunch of very similar rocks, the average content of the pile is the same as that of every single rock.
However, smart people came up with the following idea: if all these planetesimals swirl around the young Sun, they will probably hit each other. And if they hit and bounce, some of the material will be transferred from one to another. Or you even get disruptive collisions, where all the material becomes dust because the impact is so energetic!
Additionally, planetesimals probably had a very distinct structure: in their interior they featured a very massive (iron) core, surrounded by a “mantle” of less dense material (silicates). Now, if two such bodies collide, one of them probably “steals” some mantle material from the other – for example in a so-called “hit & run” collision, instead of a head-on one. This redistribution of material and change of core-to-mantle ratios is called “collisional stripping” and possibly determines the final compositional abundances of the planets.
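A toy calculation shows the effect: stripping mantle from a differentiated body raises its bulk core (iron) fraction, shifting ratios like Fe/Mg away from the starting value. All masses and fractions below are made up for illustration, not taken from the paper.

```python
# Toy model of collisional stripping: a differentiated planetesimal with an
# iron core and a silicate mantle loses some mantle mass in a hit-and-run,
# raising its bulk core (iron) fraction. All numbers are illustrative.

def core_fraction_after_stripping(core_mass, mantle_mass, mantle_lost_fraction):
    remaining_mantle = mantle_mass * (1.0 - mantle_lost_fraction)
    return core_mass / (core_mass + remaining_mantle)

core, mantle = 0.3, 0.7   # start with a 30% core by mass

for lost in (0.0, 0.25, 0.5):
    frac = core_fraction_after_stripping(core, mantle, lost)
    print(f"mantle lost: {lost:>4.0%}  ->  core fraction: {frac:.2f}")
```

Losing half the mantle pushes the core fraction from 0.30 to about 0.46, so a population of stripped (and, conversely, mantle-enriched) bodies naturally develops a spread of compositions even if every body started out identical.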
Dynamical history of the Solar System – with and without the “Grand Tack”
The aim of today’s paper is to test whether this collisional stripping can actually account for the specific abundances of the Earth. Carter+ therefore run computer simulations of the dynamics of the planetesimals in the young Solar System in two different flavours: one takes into account the possibly dynamic behaviour of Jupiter shortly after its formation (the “Grand Tack” scenario), the other neglects it. Below are two videos of the dynamical evolution of two of the simulations. We see inclination i and eccentricity e of the planetesimals plotted against the distance to the Sun a (in AU, astronomical units, the Sun-Earth distance); the color indicates where the material in the planetesimal originates (red inner disk, blue outer disk), the size of the dots corresponds to their mass, and the inner black dot represents the core fraction. They all start with the same core fraction.
The first video above shows how the outer planetesimals are scattered by the gravitational influence of Jupiter. Especially the inner planetesimals grow and grow. At some point Jupiter migrates inward and outward again, violently scattering and disrupting planetesimals, causing a lot of mixing of outer material into the inner parts. The simulation stops after ~21 million years (the red value, ignore the black one), at which time the biggest bodies are found in the very inner parts, a sea of smaller objects sits between 1 and 4 AU, and some smaller ones are at around Jupiter’s location.
This second video shows the differences from the Grand Tack scenario above. Because Jupiter is absent there is little scattering, and the dynamics are much calmer. Instead, the planetesimals grow relatively equally at all positions. A striking difference is the composition: as no planetesimals sit outside of 1.6 AU and none are scattered inward, no outer material is incorporated into the bodies in the inner regions.
Compositional evolution by collisions, stripping and merging
All of this has one reason – to find out how the dynamics and collisions define the compositions of the later planets. To get an idea of how this is going, the origin of the material at the end of simulations is illustrated in Figure 1.
The Grand Tack simulations show a lot of mixing between the different zones of the disk and therefore incorporate a lot of material from the outer regime. This is because the migration of Jupiter causes a lot of scattering among the smaller sized planetesimals and thus influences their orbits – they migrate inwards and collide with planetesimals there. In contrast, the calm disk simulation shows much less mixing. However, since the calm disk scenarios are truncated at 1.6 AU, it is unclear whether these two effects can be directly compared. But what about the Earth’s composition now? Can the collisional stripping and subsequent mixing of the material account for the shift in composition? See Figure 2.
This shows us that in the Grand Tack scenarios it is very well possible to fit the requirements of Earth’s composition! However, the calm disk scenario does not seem to produce such an attractive outcome with regards to Earth’s values.
All in all this is great news! In principle, the collisional stripping model can account for the strange abundance shift from planetesimals to finally assembled planets. Even though the final embryos in the simulations are not yet planets and still need to go through the giant impact phase (where the biggest bodies hit each other), the initial conditions for attractive results are set. Additionally, the large scatter in core fractions (look at the two videos and see that there are many bodies with core fractions close to 1 and close to 0) strengthens the idea that iron meteorites, which are mostly composed of iron-nickel, are in fact the fragments of differentiated planetesimal cores!
Note: All videos are linked from the page of the first author, Philip J. Carter. See http://www.star.bris.ac.uk/pcarter/comp_evo_15/.
I have a newly updated scale comparison graphic to share: all the round worlds in the solar system smaller than 10,000 kilometers in diameter, now with added Pluto, Charon, and Ceres.
Planetary scientist Ralph Lorenz briefs us on the current state of our knowledge on dust devils on Earth and Mars.
November 02, 2015
Soup Safari #54: Wild Mushroom at the Piazza Cafe in Liverpool Metropolitan University. by Feeling Listless
Lunch. £3.75 (85p for crust bread). Piazza Cafe, Metropolitan Cathedral Steps, Mount Pleasant, Liverpool, Merseyside L3 5TQ. Phone:0151 707 3536. Website.
TV Back in the mists of time which look somewhat like the 90s, author Paul Magrs produced one of my favourite scenes in all of Doctor Who in his novel The Scarlet Empress.
For reasons suitably too complex to explain here, the Doctor is trapped by a flock of birds who will only be satiated if he tells them stories. So, influenced by folklorist Vladimir Propp's Morphology of the Folktale, he offers them a set of typical items which appear in the kinds of adventures he has, which they can then use to make up their own.
I may have mentioned this before.
Yesterday, Paul expanded the idea on his blog and has produced one of the best pieces of writing about Doctor Who I've ever seen, laying out, in some detail, the processes one must go through in order to produce a decent Doctor Who adventure with all of the necessary elements.
To an extent it's a structure for making good drama in general, but it nicely captures just how mad and eccentric and beautiful Doctor Who can be at its very best and worst, and how you really can do practically anything with it but that it's also still possible to make a complete hash of the thing if you don't know what you're doing:
"The Doctor is captured by the enemy, doesn’t even try to escape, generally larks about until they show him their doomsday device which, depending up the relative sophistication of the story, can be either a machine that looks like a teasmade or a long, impossible explanation of the whole season’s accumulated storylines. The Doctor will stare in outrage and slight bafflement either way."See what I mean? It's brilliant. The NOTE ON VILLAINS is especially superb.
The Housing Authority of Baltimore City (HABC) finds itself once again in a bit of hot water after 11 women signed on to a federal lawsuit alleging that Housing Authority workers demanded sex for repairs in public housing. Maryland Council 67 of the American Federation of State, County and Municipal Employees (AFSCME) has called for an investigation into its own workers who may have been involved, and subsequently so has the Housing Authority itself. However, according to some reports, the HABC investigation comes a bit late, as
“In the middle of the harassment [by HABC workers], [a] 24-year-old mother went to work at HABC. She eventually told her boss, Deputy Executive Director Anthony Scott, about Clinton (aka Charles) Coleman’s alleged sexual demands. In May, according to her affidavit, she told Scott about “Coleman’s repeated sexual harassment of me and about the fact that I do not have any heat as a result.” Next, she says, an audit investigator, Reggie Scriber, called and eventually “told me that he could not guarantee my safety if I continued with my complaint.” So far, 11 women have sworn affidavits against Coleman, Doug Hussy, and Michael Robertson, saying they routinely pressured them for sex, assaulted them, and refused to do needed repairs if they did not give in to their sexual demands. The women say they tried to get supervisors to intervene, to no avail. Two men—union investigators—say they found other female residents who tell similar stories, and that they were also told by HABC management to not investigate and not put anything they found in writing. “
One of the city employees who actually attempted to investigate these complaints lost his job for the effort.
In the wake of these allegations, Housing Commissioner Paul Graziano remains in his position as head of HABC. To Baltimore insiders, this is no surprise. Commissioner Graziano has been head of HABC for 15 years. He’s Baltimore City’s highest-paid city official, raking in over $200,000 annually. And it’s hard to say that he’s not at fault for this current scandal: last year, he eliminated the position of Inspector General within HABC, the office responsible for investigating and rooting out corruption in city housing. But Graziano isn’t worried. This is the man whose administration has, in coordination with Mayor Rawlings-Blake, refused to pay out court-mandated judgments to the families that he allowed to live in lead-painted homes. He’s weathered the storms of federal inquiries about his mismanagement of tens of millions of dollars of federal housing funds, questionable ties to the realtors and landlords he’s supposed to regulate, privatization of public housing, an arrest over a drunken anti-gay tirade, and dozens of other scandals in his 15 years as Housing Commissioner.
Calls are currently coming in for his firing, from new anti-establishment City Council candidates trying to make their mark in the post-Uprising scene to old friends of his trying to get a little distance. Former Mayor Sheila Dixon, who is running again for the same seat after being ousted for stealing gift card donations intended for charity, has called for Graziano’s termination at a town hall meeting, but the citizens of Baltimore must remember that he was still Housing Commissioner under Dixon’s administration, during which time he promoted Dixon’s then-boyfriend to an important decision-making position. Of course, with his own girlfriend getting a job as a high-ranking official at EBDI, another major city housing project, we might look at the Dixon deal less as nepotism than as standard operating procedure for the Commissioner.
Commissioner Graziano is, of course, the story here. But maybe he shouldn’t be. Graziano is a symbol of the system of Baltimore politics, a system of nepotism and cronyism where the goal is not achievement but staying power. The Baltimore Sun covered his ups and downs in 2011 in a long-read article, years before the lead paint or sex-for-repairs scandals, and their article focuses on what Baltimore politics considers the most important element of his reign: that Graziano is “a survivor”. No matter that he’s a scumbag and he survives solely through his network of cronies, that’s not what’s important in Baltimore politics. What matters is the ability to weather the storm, and in that respect Commissioner Graziano is a pro.
It seems that Commissioner Graziano is to Baltimore Democrats as Lord Ashcroft is to British Tories: I’m not saying that Martin O’Malley ever stuck his penis in a dead pig’s mouth, but one has to wonder how Graziano managed to survive a series of pretty nasty scandals under O’Malley’s Mayoral tenure and, in the face of increased media reports and public pressure to oust him this time, how he got O’Malley’s protégé Mayor Rawlings-Blake to defend him and declare she is “absolutely not” firing him. Sure, it could have something to do with the tens of thousands of dollars he has donated to the campaigns of Mayors O’Malley, Dixon, and Rawlings-Blake, but even for a major contributor, and even for Baltimore, such survival is abnormal, and one is brought to wonder what dirt Graziano has on the power players of the city.
Is Housing Commissioner Paul Graziano Baltimore City’s Lord Ashcroft? by Zachary Gallant is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The post Is Housing Commissioner Paul Graziano Baltimore City’s Lord Ashcroft? appeared first on The Leveller.
November 01, 2015
Film As part of the BFI's new Player+ roll out, they've asked Kermode to choose films from their catalogue and record introductions (here he is fronting a trailer for the service). This sounds like it's going to be a weekly occurrence and, if they continue to make them accessible to anyone, an excellent recommendation engine even for people who don't have a subscription. Mark's talked before about providing similar intros to horror films on Channel 4, but this is sure to drag the net wider and, with the shooting in front of a white background, this is very much in the spirit of Moviedrome. See also the short-lived film club from his own vlog. The concept of BFI Player+ is fine although there are no television apps yet and I'd be more interested in being able to access the contents of the BFI Mediatheque at home.
October 31, 2015
As you can see, I got lent a G320 servo drive to play with.
This time I put it through a slightly more sophisticated test regime than my Leadshine driver experiments: sending 8 step pulses of 1800ms duration (except the first one, which was 600ms) [in purple], then giving it 12 forward pulses from the encoder of 360ms duration, followed by 4 backward pulses from the encoder of 1800ms to simulate an overshoot [in cyan]. The PWM voltage out is in yellow.
You can see denser and weaker voltage bands in opposition as the driver delivers more of +24V and less of -24V in equal measure in response to the inputs.
To be clear, the purple spikes are the drive pulses, and the cyan spikes are the simulated quadrature encoder pulses. They have to add up to the same amount over time, or the driver assumes there is a drift and eventually cuts out when the error gets too large. (The thin vertical white lines are 200 nanoseconds apart.)
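A sketch of the bookkeeping this implies, using the same 8 step / 12 forward / 4 backward pulse counts as the test above. The fault threshold here is a made-up illustrative value, not the G320's actual limit.

```python
# Sketch of the error bookkeeping a step/direction servo drive appears to do:
# commanded steps and encoder counts must balance over time, and the drive
# faults if the running error exceeds a limit. The fault threshold (128
# counts) is a made-up illustrative value.

class FollowingErrorCounter:
    def __init__(self, fault_limit=128):
        self.error = 0
        self.fault_limit = fault_limit
        self.faulted = False

    def step_pulse(self, direction=+1):
        self.error += direction          # commanded motion
        self._check()

    def encoder_count(self, direction=+1):
        self.error -= direction          # measured motion
        self._check()

    def _check(self):
        if abs(self.error) > self.fault_limit:
            self.faulted = True          # drive cuts out

d = FollowingErrorCounter()
for _ in range(8):      # 8 step pulses commanded
    d.step_pulse(+1)
for _ in range(12):     # 12 forward encoder counts: the motor overshoots
    d.encoder_count(+1)
for _ in range(4):      # 4 backward counts: the overshoot is corrected
    d.encoder_count(-1)
print(d.error, d.faulted)   # balanced again: 0 False
```

The 8 commanded steps and the net 12 − 4 = 8 encoder counts cancel, which is why the simulated input cycle can repeat indefinitely without the driver tripping.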
The unexplained phase half-way along the cyan pulse is actually from the encoderB pulse that is not wired up to the oscilloscope.
The PWM from the driver has an average period of 60 microseconds with a standard deviation of 4 microseconds. The horizontal red line is this average value, and the white line represents this period over the 19.1333 milliseconds (the gaps between vertical red lines) over which the simulated input cycle repeats. The yellow line is the relative PWM proportion, calculated by measuring the timeHigh and timeLow and plotting timeLow/(timeHigh+timeLow), with 50% being on the horizontal red line. The green line is the cumulative energy, which is ascending because we are only moving in one direction.
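The proportion calculation can be sketched as follows, working from a list of captured edge timestamps. The sample edge times below are invented, not taken from the scope capture.

```python
# Recovering the PWM proportion from captured edge times, as described above:
# measure timeHigh and timeLow for each cycle and compute
# timeLow / (timeHigh + timeLow), with 50% meaning zero net drive.
# The edge list below is made-up sample data.

def duty_proportions(edges):
    """edges: list of (timestamp_us, level) pairs, alternating levels and
    starting on a rising edge. Returns one proportion per full cycle."""
    out = []
    for i in range(0, len(edges) - 2, 2):
        t_rise, t_fall, t_next_rise = edges[i][0], edges[i+1][0], edges[i+2][0]
        time_high = t_fall - t_rise
        time_low = t_next_rise - t_fall
        out.append(time_low / (time_high + time_low))
    return out

# Three cycles of roughly 60 us period with a drifting duty.
sample = [(0, 1), (30, 0), (60, 1), (92, 0), (120, 1), (156, 0), (182, 1)]
print([round(p, 3) for p in duty_proportions(sample)])
```

Note this also tolerates the observed jitter in the period, since each cycle is normalised by its own total length.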
The variation in the PWM period length must be an implementation issue, as the motor isn’t going to notice it. Only the PWM proportion is going to matter, and I can see some asymmetry in this.
The sawtooth in the middle of the picture must correspond to the 8 step pulses, the first of which is 1/3 the speed of the others, which is why it’s a double spike.
Then there’s a sort of drift as the 12 encoder pulses come in to confirm it. The 4 correcting pulses cause similar spikes to the driver spikes, but somehow the device manages to do them without messing with the PWM cycle time.
The key finding is that the duration of the spike in the signal has little correlation with the encoding error. It’s just a spike of energy lasting less than a millisecond, while the actual encoder “response” doesn’t come in till 9 milliseconds later. So we’re not setting a voltage and waiting till we come to position. It’s as if there’s a big jolt on each driver pulse to get the thing moving.
I don’t actually know if this does move the motor or not. You’d think that motors would have very different responses, so it would be a challenge to build an effective driver like this that works on all of them with only two effective trimpots of tuning.
Twiddling the gain trimpot doesn’t make any difference on this scale, but the damp trimpot reduces and broadens this energy spike. I can see so many potential advanced parameters affecting the behavior, but none of them are exposed to the user. Things just kind of work okay, as they do in the world of CAM software where fixed parameters are hard-coded in every place with made up values that never get reviewed because there is no process to do so.
I can’t say whether building our own servo driver is a good idea or not. The observations here don’t fit the homebuilt driver design I read about, as the voltage is set far more dynamically and not on a 510 microsecond cycle.
It’s probably worth doing various PWM experiments on the servo motors to detect their response times: finding out how low the frequency can be for smooth movement, and so forth. This will determine the viability of running an update cycle with an integral pulse count for each window. Or maybe there’s some merit in providing advance notice of the pulses so it can change the voltages before they “arrive” instead of always following them. Later.
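A minimal sketch of the windowed pulse-count measurement that such an update cycle would need: count how many encoder pulses fall in each fixed window. The window length and pulse timestamps are invented sample values.

```python
# Sketch of estimating motor speed by counting encoder pulses in fixed
# windows, a prerequisite for a per-window update cycle. Timestamps and
# window length are made-up sample values (microseconds).

from bisect import bisect_left

def counts_per_window(pulse_times_us, window_us, total_us):
    """Number of encoder pulses falling in each consecutive window.
    pulse_times_us must be sorted ascending."""
    counts = []
    for start in range(0, total_us, window_us):
        lo = bisect_left(pulse_times_us, start)
        hi = bisect_left(pulse_times_us, start + window_us)
        counts.append(hi - lo)
    return counts

# A motor accelerating: pulse spacing shrinks from 400 us down to 100 us.
pulses = [0, 400, 750, 1050, 1300, 1500, 1650, 1775, 1875]
print(counts_per_window(pulses, 500, 2000))
```

Plotting these per-window counts against window length would show directly how short the window can get before quantisation noise swamps the speed estimate, which is the smooth-movement question above in another form.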
I have written in the past about how having richer information channels is letting us overcome various historic dichotomies. For instance, we had to distinguish between private and public companies because disclosing information was costly. We used to rely entirely on the signaling of degrees, but now in many fields people can easily show their work directly. The worker-contractor distinction is similarly the result of not having enough realtime information sharing.
There was a time when the overhead of tracking every work hour was high and at that time it made sense to treat employees who normally work for one company for a long period of time differently from contractors who move around a lot. Today the cost of metering work is virtually zero. The same goes for paying taxes frequently as opposed to once a year or possibly quarterly.
Since there is no clear distinction between workers and contractors, we had to come up with a set of criteria for drawing the dividing line. Among those criteria was a notion of who controls which tools are used to carry out the work and who pays for these tools. According to the law you are more like an employee if the company requires specific tools which it provides and more like a contractor if you are providing your own tools.
In a world of digital technology of course the tool itself becomes largely irrelevant. It doesn’t matter if you are using an iPhone or an Android phone. Or if you are on a MacBook or a PC Laptop. All computers can do the same thing. Instead of the computer or smartphone what really matters is whether or not you own the code that is executed and whether or not you control the data.
So if we are going to maintain the dividing line for now, then a key consideration should be who controls the code and owns the data. If the company owns the data of the service and you can only provide the service if you are executing the company’s code, then this should be considered a factor that points towards employee status. Conversely if companies provide an API so that the work can be performed using one’s own software and one has access to one’s own data then that should point in the direction of contractor status.
In the long run, though, our goal should be to do away with this artificial distinction between employees and contractors altogether. With a Basic Income, payroll taxes wouldn't be needed at all, and income tax could be collected as people earn additional income (any dollars earned above the basic income would be taxed). There would then be no need for a complicated yearly tax filing process, and no working capital problem for government.
All of this kind of thinking is premised on the principle of “don’t automate, obliterate” – too much of what is currently being debated in the policy realm is about automating existing processes and further enshrining categories that made sense historically but no longer do. It is time to figure out what government and society can and should look like now that we have digital technologies.
Dawn has completed another successful campaign to acquire a wealth of data in its exploration of dwarf planet Ceres, providing our clearest and most complete view ever of this world.
Members of The Planetary Society staff visit the new "Journey to Space" exhibit at the California Science Center in all its wonder.
TV “You're middle-aged, that's what it is. No offence. Everybody middle-aged always thinks the world's about to come to an end. Never does.” Yes, thank you evil Zygon clone of Clara. But you try turning forty-one. All day I’ve been considering the implications. Forty-one’s a weird age. It’s not a milestone like forty and it doesn’t somehow unlock the secret of life, the universe and everything as I expect forty-two to do. You’re just sort of verified as being in your early forties. About the only revelation I’ve had is that my favourite colour’s electric pink after having tried to convince myself for many years, probably because of societal norms, that it’s royal blue. That's presumably why I also love Cath Kidston shops as much as I do even though I have barely a use for any of it.
Today has also brought presents which, apart from a Frozen Blu-ray, soundtrack CD and 2016 calendar, also included the first Doctor Who episode broadcast on my birthday which I’ve actually been able to watch, not having been in this world in 1964 when Planet of Giants was broadcast. Thankfully Peter Harness’s The Zygon Invasion wasn’t a disappointing end to an otherwise brilliant day; it was the first story this year, and I think in a few years, which made me gasp on several occasions and had me applauding at the end. Which is why I’m happy to spend the final couple of hours of today writing about it, although as we both know, if I didn’t do it now, the results would be even more disastrous than you're about to witness. Best get all of this out now before I’ve really had a chance to think about it.
Bold and necessary opening. Anyone else wonder if there’s a reason why there are all of these reminders of the show’s past, recent and otherwise? It’s potentially necessary here to somewhat explain why there are twenty million Zygons living on Earth for casuals who haven’t rewatched the fiftieth anniversary since it was beamed into our retinas through screens large and small and might wonder why anyone would do such a thing, but every story this year has featured some kind of flashback. At first I thought this was part of the non-celebration of the tenth anniversary of the revival which isn’t happening, except the augmented publicity shot of Bill wiping his glasses from The Celestial Toymaker hanging on the wall in the UNIT Safe House makes it seem as though they’re trying to fit every incarnation in somewhere.
Before heading off into the more deliberate discussion of the rest of the episode, let’s consider why this might be. Partly it could be to give Capaldi a more subliminal version of the montage which provided Matt Smith with his entrance in The Eleventh Hour, something he was denied. Or it’s a forward reference to an upcoming episode which explains his cameo in The Day of the Doctor (because Moffat hates loose ends). Or someone in the production team is playing Packham and trying to fit them all in somewhere before the end of the season. Either way, we await the McGann cameo with great interest (and I haven’t entirely disregarded the notion of the actor himself turning up in some capacity). Unless the Doctor’s Amazing Grace solo is an astonishing inside joke.
Anyway, back in the episode, we have an all too rare example now of the television series underpinning its japery with some contemporary real world allegory, in this case religious fundamentalism with direct hits made towards ISIS in particular, the terrorist organisation rather than Sutekh’s sister. If Aliens of London made fun of the apparent intelligence which led to the war in Iraq, here’s The Zygon Invasion ten years later considering some of the consequences and somehow also commenting on a refugee crisis which hadn’t even gained publicity when the story was in production. With Spooks and W1A off-air, someone has to do this sort of predictive texting, I suppose.
Given how well regarded the Zygons seem to be amongst the public (and David Tennant) it’s surprising that this is only their third proper television appearance, although their spin-off mythology is deep, spanning videos of the kind which aren’t allowed to mention which series they’re based on, novels, audios and comics. Just look at the Datacore entry, though not too closely because there are plenty of spoilers for excellent stories worth tracking down, notably the Eighth Doctor trilogy of which some of tonight’s episode is particularly reminiscent, especially about how they might integrate into our society. Like the Daleks they’re gifted with a particularly distinctive vocal sound, all gurgling consonants, which even on audio immediately brings to mind their physical form.
With so much of this series being influenced by the 70s show, it shouldn’t be too much of a surprise to find Harness channelling Hulke and Holmes, though not having read any pre-publicity, I don’t know the extent to which we’re seeing the writer’s political leanings inhabiting his scripts or if he’s simply utilising this “stuff” to help underpin the drama and place it within an understandable context. Plenty of the dialogue is on the nose though, especially from the Doctor who notes at one stage that if they start bombing even these interlopers it has the potential to radicalise the lot. There is not much here for the Daily Mail to enjoy. “Well you can’t have the United Kingdom. There are already people living there. And they’ll probably think you’ll want to pinch their benefits.” Bdum-tish.
If the episode’s approach to thematic underpinning and actual source material is of the 70s, the structural twist is that in an era which has otherwise attempted to ape the pacing of that era with long scenes filled with lots of dialogue in single locations, this is a story which instead sees a return to the pacier crosscutting and action orientated plotting of an earlier period of the revival before the show took a Deep Breath. At one point there are three whole sub-plots running simultaneously with secondary characters, including, as it turns out, a Zygon officer (albeit in a form which isn’t obvious until the end). If the Capaldi era so far, for the most part, has felt a bit cramped and claustrophobic, this is the show explaining where the budget’s been for half a season, demonstrating that it can still look like a feature film if it wants to.
How deliberate are these choices? Who knows, but the effect is startling, luminous. This feels of a piece with the likes of the aforementioned Slitheen fest and the Sontaran showdown from series three, and not just in its use of doppelgangers and one particular in-joke about a certain “controversy”. The Davies era in particular had this kind of inter-seasonal variety within its DNA, with Midnight and Turn Left emerging one after the other. None of which is to say that next week won’t simply be about Twelfth going Jack Bauer on a Zygon for half an hour in the UNIT equivalent of Guantanamo until he’s discovered the location of the real Clara (epic to small being a typical two-parter arrangement in the revival). Perhaps if the change of pace hadn’t been quite so deliberate back in the days of last year many of us might not have felt quite as disorientated (though having the Doctor act like a complete arsehole didn’t help much).
It’s also true that with the exception of the aforementioned axe-habit, the wilder excesses of the Twelfth Doctor are toned down to the point that he’s almost the fabled generic Doctor in places, showing a remarkable amount of courtesy and decency. The story would probably have worked equally well with the Eleventh Doctor, although as with Time Heist, it seems strange when he doesn’t intervene to stop the UNIT soldiers from being kidnapped and killed by the Zygons, though it’s also true that geographically he is much further away from them than the poor sod who had his brain eaten. Once again Capaldi seems very happy about this, relaxed and on his game. No wonder he doesn’t seem to be in much of a hurry to move on, having apparently signed for his third season.
The Time Lord's attitude to Osgood is interesting though, oscillating between wariness and warmth. The treatment of her return is especially well handled, helping to shore up the idea that not all Zygons are evil, or at least no less capable of evil than humanity, by making the notion of whether the Zygon or human version died at Missy’s hands largely irrelevant (although I think it’s implied that, as fans assumed, it was the former). Ingrid Oliver continues to impress. Even in her introductory story, a character that pre-publicity indicated was going to be the revival’s equivalent of Whiz-Kid, parodying fans, is instead a love letter to them, to us, and Oliver’s tapped into that, though it’s notable that her adorkableness has sharp edges on this occasion for obvious reasons. The question marks on her lapel have a new resonance.
It’s close to midnight so perhaps I should start closing out, but not without mentioning the whole Clara business. The companion substitution is handled much better here than in either New Earth or The Sontaran Stratagem, with elements such as the conversation about Truth or Consequences, which previously seemed like a fairly typical explanation for sudden companion insight à la 42, now revealed to be part of the Zygon plan. The critics who simply don’t like Jenna Coleman won’t be convinced by any of her superb work here, but the subtlety with which she indicates this different version on a second watch is startling. She’s both Clara and also somehow not. The look she gives on realising that she’s been uncovered is utterly mind bending, right up there with Sarah Clarke’s Nina Myers as she gazes through the security feed in CTU on 24.
Not all of the episode works; some of the points are belaboured, and the mechanics of just how all of these Zygons could exist secretly within the population are not entirely convincingly explained. But this is the kind of story in which everyone on the production team is engaged, especially Murray Gold, laying in a version of Clara’s theme just before her substitution, notes which go entirely unheard for the rest of the episode. Director Daniel Nettheim (who previously helmed four episodes of K9!) takes full advantage of wide angle lenses in orbit of the Zygons, entirely unafraid to show us their rubber visages. Then it ends on a really solid cliffhanger. My guess is that Kate's playing fake Clara at her own game and pretending to be a Zygon. Let's see, shall we?
October 30, 2015
Theatre Shakespeare's Globe's global tour of Hamlet, an attempt to perform the play in every country in the world, hit a problem with Syria, where, for obvious reasons, they couldn't get permission or the insurance to perform. So instead they played to two hundred Syrian refugees at a camp in Jordan and The Guardian has a photograph journal of the moment:
"The children were keen to play and interact with the visitors, though this made the UN administrators nervous. Visitors are encouraged to stick to their itinerary. The mandolin provided a way for one of the actors, Miranda Foster, to connect with the enthusiastic kids at the camp - she made chord shapes with one hand while teaching them to strum the chords."
Don't forget to hit the play button on the accompanying audio which has music from the production and actors talking to representatives from the camp.
Soup Safari #53: Broccoli and Parsnip at Café Rylands in The John Rylands Library. by Feeling Listless
Lunch. £3.75. Cafe Rylands, The John Rylands Library, 150 Deansgate, Manchester M3 3EH. Phone:0161 306 0555. Website.
This is not a post about Breaking Bad's brilliant Star Trek pie-eating contest scene, although the title would work.
Rather, it's about a pet theory of mine, which is that one of the reasons the matter transmitter is overlooked as an enduring and important trope of science fiction is because it doesn't have a cool name.
"Ray gun" and "robot" are examples of evocative nomenclature that entered common parlance way back when and stuck there. Alternatives exist, such as "blaster" or "android", but they're not universal. Everyone knows what a "time machine" is, if you want another great example.
So why not the matter transmitter?
I'll admit that this is a personal bug-bear. I've been selling matter transmitter stories since 1991, up to and including my latest novel, Hollowgirl. A couple of years ago I received a PhD for research into the trope, making me arguably the world expert on the subject. (Which is not to say that I am a complete authority, just that no one else has taken it on.) I'm currently outlining a non-fiction book called Traveling Light in order to explore the topic further, because I think it has interesting things to say about the evolution of science fiction and of science itself. The idea is almost a century and a half old, after all, and not much closer to becoming a reality than it was then, despite the convergence of 3D printing and scanning technologies. Maybe it's too fantastical for hard SF to deal with, or maybe the ramifications of the technology are too broad. Drop a working matter transmitter into your story, I'd argue, and everything changes. The trope is like a black hole, warping everything else around it.
Or maybe, as suggested earlier, it's just the name.
So where did it all go wrong?
From the beginning, seemingly. Edward Page Mitchell got there first in 1877 with "Telepomp", an ugly term that was immediately and rightly forgotten. It's possible The Space Machine might have worked (as Christopher Priest suggested much later), but H. G. Wells claimed that construction first, even as he nicked the idea for a machine that travels through time from the incredibly inventive Mr. Mitchell.
Subsequent attempts weren't much better: "lightning transmitter", "electrical transmission" and "etheric transmigration" all failed to stick, and "matter-sending apparatus", although accurate, lacked even a hint of prosody.
The trope's best bet, both in terms of catchiness and a high-profile champion, might have been Arthur Conan Doyle's "The Disintegration Machine", except for the fact that the story was slight, and the name only describes half the process. That problem was fixed in the best-known story containing the trope ("The Fly"), but "disintegrator-reintegrator" was never going to catch on.
"Transporter" did catch on, thanks to Star Trek, but only within the context of SF. The word has too many other meanings. It's no "spaceship".
And so the problem remains. There are many names for matter transmitters. Everyone seems to have had a crack at it. The list below ranges from the evocative ("skelters") to the ridiculous ("transplat"). It was some years in the making, but I'm sure there are gaps, maybe even outright errors. If you find them I'd be grateful.
What would make me truly ecstatic, though, is a kick-arse alternative.
To paraphrase Arthur Machen, give a toy a cool name and everyone will want to play with it.
The list (in chronological order):
- Telepomp (Edward Page Mitchell, "The Man Without a Body", 1877)
- lightning transmitter (Tremlett Carter, The People of the Moon, 1895)
- electrical transmission (Clement Fezandie, "The Secret of Electrical Transmission", 1922)
- etheric transmigration (Benjamin Witwer, "Radio Mates", 1927)
- matter-sending apparatus (Edmond Hamilton, "The Moon Menace", 1927)
- super-radio (Charles Cloukey, "Super Radio", 1928)
- Nemor Disintegrator (Arthur Conan Doyle, "The Disintegration Machine", 1929)
- X-cast (Norman Matson, Doctor Fogg, 1929)
- Cosmic Express (Jack Williamson, "The Cosmic Express", 1930)
- matter transmitter (Leslie F. Stone, "The Conquest of Gola", 1931)
- beam transmission (George H. Scheer, "Beam Transmission", 1934)
- destinator (Charles B. Pool, "Justice of the Atoms", 1935)
- transposer (Murray Leinster, "The Fourth Dimensional Demonstrator", 1935)
- radio transporter (Arthur C. Clarke, "Travel by Wire!", 1937)
- Leggett-Heath Reproducer (William F. Temple, Four Sided Triangle, 1939)
- Radio Transit (Don Wilcox, "Wives in Duplicate", 1939)
- teleport (H. Walton, "Boomerang", 1944)
- Verdi Matter-Transmitter (Alexander Blade, "The Vanishing Spaceman", 1947)
- telesender (Frank Hampson, Dan Dare: Voyage to Venus, 1950)
- Some of these authors are pseudonyms, which seems fitting given the title of this post.
- Only two of them are women. Care to speculate why?
- I have neglected computer games and fantasy. Maybe next survey.
- The line between teleportation and matter transmission is a grey one, I know.
Comics For all that I might say that I don't buy comics, here's the list of comics I'm currently buying regularly:
Buffy: The Vampire Slayer
Angel & Faith
Star Trek (the movie version with all its related spin-offs, yes including Green Lantern)
Lois & Clarke
We'll return to the bottom one shortly.
You'll notice the lack of Doctor Who: that's mainly because there are so many releases that to keep up with them all would be a messy and expensive business even though some of them are really good I hear. I'll be picking up George Mann's Eighth Doctor series though, for obvious reasons.
For a while I did collect the SHIELD comic, for about eight months, until it became clear that it was simply going to spend its issues recreating bits of the television series within the 616 utilising their versions of the characters who aren't really the ones from the television, however much they attempted to re-engineer them a bit. I'm replacing it with Devil Dinosaur and Moongirl.
Nothing about DC's New 52 reboot has interested me. As a Who fan I understand and appreciate the notion of renewal, but in the 52verse DC seem to have taken a group of characters who, through some strange alchemy, worked perfectly well for decades and survived successive reboots which kept their core elements, thrown all that out and, from everything I've read, introduced shadows of them.
But Lois & Clarke does the interesting thing of bringing back the original Lane and Superman and putting them in that universe, still trying to help but keeping under the radar. The first issue does a good job of explaining this status quo and of making us quite certain that these are people who existed pre-Flashpoint (the event which sparked the reboot).
The professional reviews seem to like it and judging by the comments underneath, more regular readers seem to appreciate the return of these characters, salivating at the suggestion that perhaps they'll end up replacing the version that's in the 52 however unlikely that is. There's now the added problem that whenever there is a major disaster happening elsewhere, readers may wonder why this Kal-El doesn't pitch in.
They were reintroduced in the Counterpoint crossover event, a sort of parody of Secret Wars in which chunks of different realities appear on the same planet and the heroes therein fight each other, designed to cover publisher DC's house move to LA from New York and I realised as soon as I'd finished that opening issue that I wanted to read their first reappearance.
Comics chains no longer carry back issues. Neither Worlds Apart nor my usual haunt Forbidden Planet keeps issues older than about three months outside of graphic novels or trade paperbacks. Apparently it's because comics simply don't sell after three months other than the big titles. By then, people are simply waiting for the trades.
Options on Ebay and Amazon are expensive due to postage.
Then one of my contacts in Forbidden Planet pointed me towards Level Up, a games/music/comics shop in the basement of Grand Central Shopping Centre, the old Methodist Hall on Renshaw Street, the Liverpool equivalent of Affleck's Palace in Manchester.
It's basically perfect, everything you'd hope a comic shop would be, with its long boxes of back issues in the centre, older issues on the walls, retro games and dvds and well, yes, it's great.
Asking at the counter I was pointed to a long box containing complete sets and near the front were the Superman issues of Counterpoint at a price cheaper than they would have been on original retail.
Afterwards I promised I would talk about them on the internet and here I am plugging away.
Of course having read those back issues, I now realise I also need the main Counterpoint series to get the whole story and we're back to all the reasons I don't read too many comics. But at least I know now where I might be able to buy them.
Turning off and archiving our Slack channel was the final act in shutting down Contributoria, the project that’s been my main job for the last two years. All those Git commits, Rollbar errors, gifs of cats spectacularly failing at jumping, sign-ups, user activity, reading time statistics and so on gone, like leaves on the wind.
That’s the moment when I knew it was all over. But there were lots of smaller moments just before then that went towards shutting a start-up down.
When shutting something down on the internet there seem to be three ways to go.
- Get bought out by a larger company, burn everything to the ground and end up on the Our Incredible Journey tumblr.
- Fuck up your business model (or simply just not have one), burn everything to the ground and end up on the Our Incredible Journey tumblr.
- Acknowledge the project is coming to an end, spend time freeing all the assets (as Tiny Speck did with the Glitch resources, putting code and art into the public domain) or turning your site into a time capsule, the best recent example probably being ThisIsMyJam.
Back in the day some of us at Flickr would occasionally mull over the question “What happens if Flickr goes away? Once you become the repository of millions upon millions of photos, do you have a moral obligation to preserve them?” which in turn becomes the WWY!D? question.
And knowing exactly what the answer to What Would Yahoo! Do? is, is enough to make you want to do better.
With Contributoria offering to be the custodians of writers’ work, how could we do the best job possible of making sure we didn’t let those writers down? So right from the start Contributoria was built to be shutdownable in the third, “nice” way.
Those not familiar with Contributoria who want to know what it was should check out the footnote “About Contributoria” below. (tl;dr a site where writers got paid to write, without being covered in adverts*).
While not as slick as ThisIsMyJam, I wanted to take the time to make a few notes on what we did to not just throw everything away.
Shutting down, with code
Sorry for the next very obvious bit: decisions you take at the start of a project can make actions further down the line either easier or harder. The trick to success (aside from the more important luck) is to identify the further-down-the-line stuff as best you can, to make well informed early decisions that’ll guide you there. The more stuff you build, the better at this you get.
I’m not saying that you should decide on day one that you’re going to shut down eventually and code everything around that, but to at least factor in the possibility. Fortunately shutting down coincides with a few good practices. These are what we used.
1. Feature Flags.
I first got introduced to these at Flickr, and some of this is partly a function of that being a long time ago in Internet terms. A time when flipping between source control branches wasn’t as super easy as it is now and continuous deployment essentially meant that if you committed code it’d almost certainly go out into production later that day. Therefore you’d hide new features or functionality behind flags that you could turn on with ease.
The beauty of this is that you’re not deploying code to launch new features, which still happens a surprising amount, but rather the code could have been waiting on the site, hidden for days (or weeks), and all you need to do is flip the switch when everyone is ready. By “ready” I mean your community team are poised to react in the forums, twitter, facebook etc.
Of course the best part about them is you can turn features off again, which we did quite liberally with Contributoria, switching non-essential bits off first, working our way up to the grand shutdown.
There were also a few things to turn on. Dean and Nat designed & built archive pages, the final homepage and various copy changes to the site to go along with various stages of shutting down. All that was left for me to do was flip those switches on at the right time throughout the last month.
Preparing for shutdown was a bit like launching a site & somewhat involved, but actually doing it was surprisingly anti-climactic, “BRB just turning the site off”.
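The feature-flag pattern described above can be sketched in a few lines. This is a generic illustration, not Contributoria's actual code; the flag names and page structure are invented:

```python
# A tiny in-memory feature-flag registry: code for a feature ships
# "dark", and the flag is flipped when everyone is ready.
FLAGS = {
    "new_editor": False,  # deployed but hidden
    "comments": True,     # live feature; can be switched off again
}

def is_enabled(flag):
    # Unknown flags default to off, so code can check flags
    # before the flag has even been registered.
    return FLAGS.get(flag, False)

def render_page():
    parts = ["article"]
    if is_enabled("comments"):
        parts.append("comments")
    if is_enabled("new_editor"):
        parts.append("new-editor")
    return parts

print(render_page())        # ['article', 'comments']
FLAGS["comments"] = False   # winding down: non-essential bits off first
print(render_page())        # ['article']
```

In a real deployment the flag store would live in a database or config service so switches flip without a deploy, but the shape is the same: the rendering code only ever asks `is_enabled`.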
2. Read only mode.
Along with feature flags we also had the Read Only Switch Of Doom for times we needed to do late night database backfills and migrations. Times when we really didn’t want writes to the database taking place.
This also disabled comments, backing proposals, profile editing and all the writey-things.
Which means we already had the code in place to stop any updates being made to the database. Giving us our final never to be changed One Database of Truth.
3. “Offline” read only mode.
Beyond read only mode, is the “OMG the backend can’t see the database at all” mode. Because we were a start-up our backend servers were in one place, while our databases were hosted somewhere else because sometimes databases are hard.
While running, the site would periodically save to disk “flat” versions of pages being served, saved in the context of a logged out user. If for some reason the backend servers couldn’t reach the database the site was built to flip over into read only mode and serve the local versions of articles, issues, user profiles, images and so on.
That last one is very dependent on the function and scale of your site and so isn’t for everyone.
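The flat-snapshot fallback can be sketched like this. It is a toy version of the idea only; paths, names and the fake database are all invented for illustration:

```python
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for the on-disk snapshot store

def _flat_path(path):
    # Flatten URL paths into safe filenames for the snapshot store.
    return os.path.join(CACHE_DIR, path.replace("/", "_") + ".html")

def serve(path, db_alive):
    """Serve a page, snapshotting it while the database is reachable.

    While running normally the site saves a logged-out "flat" copy of
    each page it serves; if the backend loses the database it flips
    over and serves those local copies instead.
    """
    if db_alive:
        html = f"<h1>{path} (fresh from db)</h1>"
        with open(_flat_path(path), "w") as f:
            f.write(html)  # refresh the snapshot on every normal serve
        return html
    try:
        with open(_flat_path(path)) as f:
            return f.read()  # database down: serve the flat copy
    except FileNotFoundError:
        return "<h1>archived page unavailable</h1>"

print(serve("articles/1", db_alive=True))   # fresh page, snapshot saved
print(serve("articles/1", db_alive=False))  # same page, from the snapshot
```

The snapshots are saved as a logged-out user sees them, which is exactly why, as noted above, this approach only suits sites whose pages make sense without per-user state.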
Because we’d planned the site with several failsafe controls we had pretty much all the code in place to put it into its final archive state.
Nat took charge of the final step, writing the code that’d turn the whole site into static HTML that could be run locally, an archive file to be submitted to archive.org and so on. Wonderfully the tool Nat wrote was elegant, beautifully documented and designed to be ultimately run just the once. It worked perfectly.
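The heart of a run-once static-archiving tool like Nat's is a crawl that visits every reachable page exactly once and writes it out as a file. This sketch uses an invented in-memory "site" rather than real HTTP fetching:

```python
import os
import tempfile

# Toy "site": page path -> (html, links to other pages).
PAGES = {
    "index": ("<a href='about'>about</a>", ["about"]),
    "about": ("<p>about us</p>", []),
}

def archive_site(start, out_dir):
    """Walk every page reachable from `start` and write static HTML."""
    os.makedirs(out_dir, exist_ok=True)
    seen, queue = set(), [start]
    while queue:
        path = queue.pop()
        if path in seen:
            continue  # each page is written exactly once
        seen.add(path)
        html, links = PAGES[path]
        with open(os.path.join(out_dir, path + ".html"), "w") as f:
            f.write(html)
        queue.extend(links)
    return sorted(seen)

print(archive_site("index", tempfile.mkdtemp()))  # ['about', 'index']
```

A real archiver would also rewrite internal links to relative paths and bundle assets, so the result runs locally and can be handed to archive.org as-is.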
With our archive FTPed (old skool) to a more lightweight server we could retire all the old code, servers, CDNs, analytics and database. Which (pro-tip: when you start a start-up, don’t set up all the accounts against your own credit card) I was rather pleased about. The only additional thing still being paid for is the fonts, because you still want your archive to look stylish!
Shutting down, with users
A good while before we switched functionality off we let the users know what was happening. We also refunded the final month’s membership and paid writers kill fees for articles. The advance notice gave them a couple of months to delete any of their own content they didn’t want archived for the rest of eternity.
It’s almost as bad to suddenly tell users that “Hey, we’re going to archive everything you’ve done, and by the way we’ve turned logging in off” as it is to delete everything.
8 people did.
Shutting down, with archiving
As well as the site, Contributoria lived in a couple of other mediums. Members could download monthly kindle issues and we sent honest to goodness newspapers (lovingly put together by yours truly and published by Newspaper Club) to premium users. It’s odd looking back over the last two years to think that I helped to publish a monthly newspaper.
We decided to make all of those public: anyone can view and download the ePub/mobi versions of the monthly issues from the archive page. We flipped the papers to be embeddable and downloadable as PDFs from our Newspaper Club newsstand; you can even buy them for old times’ sake.
Yes, even if the server dies there’ll still be paper archived copies of the best articles around, take that GeoCities!
In all it took about two months from announcing the shutdown to carefully bringing it in to land while letting users know what was happening. With press releases, a final print run in the 1,000s with Vivienne Westwood editing, and a permanent record in website, ebook, PDF and newspaper form, I think in the world of shutting down we did pretty well.
You should totally try it sometime.
Footnote: About Contributoria
For those that don’t know what Contributoria was: very briefly, it was a platform built for readers who want to support good journalism, and writers who want to be paid to write about subjects people want written about. It was often described as a cross between Medium and Kickstarter: a clean, advert-free reading experience, with users backing stories they wanted to read with cold hard cash paid to the writers. Although there was more to it than that obviously.
It was funded by the Guardian, as part of their ongoing explorations into alternative journalism models, with extra support and funding from (transparent) sponsorship.
To respond to the inevitable “Well, I guess it didn’t work if you’re shutting down” I’d say that running Contributoria has taught us that there’s a hugely fertile area out there, supported I guess somewhat by this footnote in Medium’s blogpost from the other day…
“We’re also starting to work on monetization features for authors and publishers. This is a green field project, and we’re approaching the problem space with open minds. We think the future needs new mechanisms for funding content, and we want to make sure our features incentivize quality content and value to the network.”
…my experience leads me to agree, there’s plenty of space in this area to pay writers without advertising. It was just the right time to close things down for various reasons.
I hope this doesn’t get us on Our Incredible Journey.
*The site, not the writers.
- Albert Wenger
- Charlie Stross
- Dan Catt
- Emily Short
- Fairphone blog
- Feeling Listless
- Janet McKnight
- Jeff Atwood
- Richard Pope
- Simon Wardley
- The Leveller
- The Planetary Society
- Think Justice
- Tom Darlow
- Vinay Gupta
- a sense of place
Updated using Planet on 24 November 2015, 06:48 AM