The advert wars

One of the pitched battles in this century’s cyber war is about advert blocking and injecting.

It’s in full flow.

You can tell – journalist friends complaining that ad blockers have killed Joystiq, a 10-year-old gaming magazine; web friends complaining that an ad blocker charges advertisers not to block some ads.

I’ve got some perspective.

Back in 1996, I helped make one of the earliest web advert blockers, WebMask.

Why did we do it?

The early web was fundamentally non-commercial. It was built so as not to be CompuServe or The Microsoft Network. I often bought copies of Adbusters magazine, which unpicked how runaway consumerism harms our culture and our health.

Buy Nothing Day

Our advert blocker was a side project which quickly died. Fascinated with the different technologies that could be used to block ads, for years I maintained the most popular page listing such software.

A little later, I used the traffic from that to give instructions on removing all manner of commercial internet material.

The adverts inside web sites animate, flash and distract. Wouldn’t it be nice to get rid of them? They are not unethical like the other nasties on this page. Indeed, some consider it stealing from the site’s ad revenue to block them. Others either don’t believe in the need for such self sacrifice, object to adverts in principle, or say that since they never click on them anyway no revenue is lost. You decide.

A telling quote.

What’s happening now?

Everyone wants to control our adverts.

Google intercepts our very quest for information to take a sneaky cut – a tax – on everything that we buy online. Facebook uses our addiction as social primates to relationships to sell us corporate brands.

Content creators embed multiple tracking devices, sending our most personal information to complex morasses of robot hucksters.

Display advertising landscape

When the industry found that people no longer looked at or clicked the distracting ads, it instead placed adverts which look like camouflaged tabloid stories.

Or sod it all, just publish fake stories that are actually adverts. After all, what’s it matter, if all the real stories are just retyped PR churnalism?

Real time exchanges auction the pixels on our screen based on our profile, to sell us things we had tried to decide we couldn’t afford to buy. Or just to remind us that we think we’re fat.

My mother rang up the other day asking “what this Bing thing is”. She didn’t like it – not as good as Google. The routine update of some standard software had changed her default search engine. Despite being an experienced computer user, she had no idea how to change it back.

I can’t even blame this on malware – her web browser’s maker does much the same thing.

Even the hardware manufacturers are at it – Lenovo using their power over our metal to inject new adverts into web pages (risking the security of online banking and shopping in the process).

Such a battle.

What’s the complaint about AdBlock?

As of September there were 144 million active AdBlock users.

It’s no longer a geek thing, like it was when I first blocked ads back in 1996. It’s upsetting some content creators.

In my view, the complaint here is that the users are finally trying to control their adverts. How dare they!

Sorry, but everyone else is trying to control our brains (you saw my list in the section above). Why shouldn’t we try to control them too?

This is a long, cold cyberwar. As such it is negative sum – nobody is going to do well out of it. Expecting me to act morally on behalf of the other combatants during a war takes chutzpah, and is futile.

I’m not going to let advertising companies win the control of our information society by doing whatever they say as if that were morally pure.

Maybe if they had a good business model that helped me out. But they don’t.

Advert companies and advert-funded companies aggregate our private information without our knowing permission. They create insecure data vaults and comms channels, which governments and criminals then easily dip into. They secretly run psychological experiments on our social lives.

If you are going to do whatever you like on your general purpose computer (a server), then I’m going to do whatever I like on my general purpose computer (my laptop). Tough luck.

We can have a truce, but you need to parley first. All sides have to give things up.

Of course it is not that simple. Who controls the power of general purpose computation is one of the key hard challenges of our generation.

We’re not going to solve it in five minutes having a chat on social media. It requires the great casualties, then the concerted diplomacy that led to, say, the Geneva Protocol, which banned chemical weapons.

And even then…  Well, physical war hasn’t ended, has it.

Who is coming to harm in this war?

I spent a long time angsting about investigative journalism – how can it get funded on the Internet?

Digging into the world in detail is vital to society. I’ve tried to help by helping journalists use data – we have many journalist users at ScraperWiki to this day.

Adverts are not the answer – the kind of journalism I want doesn’t drive traffic. Take a look at the celeb soft porn of the (profitable) Mail Online front page. It’s very different from the newsstand Daily Mail’s crazed politics front page. Telling, as to what kind of journalism Internet advertising funds.

In my view philanthropy is the best business model we’ve got for good, socially worthwhile journalism. Look at the excellent ProPublica for an example.

[ Aside: Perhaps it always was the model – a long time ago in US cities, newspapers got their money from classified ads. They didn’t need to do socially useful exposés of corruption to keep that money, but their owners chose to anyway. They loved their city. ]

The other main victim is the smaller content creators. You can feel their pain in Matthew Hughes’s article about AdBlock on MakeUseOf.

If you use AdBlock, know you are still screwing over hard working people, because you can’t be bothered to be mildly inconvenienced. (Tweet)

I don’t have a great prescription for them. Use technologies that block advert blockers – preferably inviting your readers to unblock your site, or donate instead. Make sure your advertiser is one of the ones that pays AdBlock (see below). Only show quality adverts. Try (again!) other business models.

My larger scale prescription for creators is – create standards! Have a proper RFC for a protocol for adverts, and get a standard system for distributed payment built into all browsers (BitCoin’s children will get there in the end).

These will reduce the costs, make the relationship more directly between producer and consumer, and get rid of the parasitical, big data optimising tech advert industry.

Forming the protocols would be a good peace conference.

Conclusion

AdBlock, in its form which exhorts advertisers to pay it to let their adverts through, is providing value. It is improving the quality of adverts with its acceptable ads criteria. Adverts shouldn’t obscure content, they shouldn’t fill up most of the page, and they should be clearly marked.

I don’t think that’s blackmail – on the contrary, I think it is a fascinating step towards the terms which should be in the final truce.

Meanwhile though, some users either suspect that AdBlock itself has been corrupted by money, or want to carry on in pitched battle. They’ve moved on to forks of it that continue to block all adverts.

Others are more adventurous.

Brett Lempereur, following an argument in the pub in Liverpool, made Sadblock. It is a morally sound ad blocker. When it detects the adverts, it blocks the entire page – so you can no longer be accused of stealing the content.

If you don’t find even that kind enough to article writers, there’s one final ad blocking option. AdNauseam is a funky AdBlock extension which loads all the adverts invisibly, and quietly fakes clicks on them.

Bliss – no adverts, and money goes to the journalists. (Something not quite right here – who’s losing out again?)

Of course, all that is just users taking control.

The ultimate control?

Moving on from it to form a new technology industry, with better business models, and a better heart.

P.S. The collapse of Joystiq which started this recent argument about advert blocking has a final twist. It has been resurrected as part of Engadget.

Which web development tools are commodities?

We’re really bad at thinking about innovation.

Value chain map

To improve my own sense, I’ve been gradually absorbing Simon Wardley’s Value Chain Mapping since first seeing him talk about it a few years ago.

The picture to the right is an example of one of his maps. Each blob is a technology need.

As you go from left to right in the map, the technologies go from custom built, through product to commodities. (You can read more in an introduction by Simon, also see my post about the product/market space).

Anyway… The purpose of this blog post is to assess the state of play of various technologies a developer needs to make a new web application. Do we still have to make it ourselves, or is it a standardised thing?

I’m writing this while trying to build a web application (called MPCV in this post, to collect the CVs of candidates for Parliament). Since I’m doing it essentially for fun, I’ve got a very low tolerance of extra effort, so I’ve been pushing things as far right as I can.

Here goes:

Compute, Power, Web Server: These, and indeed Laptop and the whole of the rest of industrial civilisation, are at the Commodity stage. Or at least way over to the right hand side of Product.

Web hosting: You can just throw PHP scripts into a directory on a shared hosting account, or register a Flask app with a PaaS like Heroku or Google App Engine. This doesn’t feel like a commodity yet – there aren’t standard methods, the offering isn’t as clearly defined as, say, electricity. So kind of mid to right hand side of Product.

Source code: How programmers keep the product of their labour has a long history, but it certainly feels like a commodity now. Pretty well everyone uses git for everything, with GitHub, Atlassian and Microsoft offering very similar hosting services. Left side of Commodity. The linked issue trackers don’t have strong standards yet and are hard to migrate between, so they’re over to the right of Product.

Web clients: With KHTML’s descendants in nearly every browser, IE6 pretty well actually gone, and web browsers in over a billion end user pockets, this is looking pretty good. Add to that the very mature low level libraries like jQuery and Backbone, and it is a place of dreams. Even video is nailed (I use the <video> tag on Redecentralize). They’re a Commodity.

Mobile apps: A warring duopoly, with near-identical feature sets, but you have to write your app twice – in different languages. I put the development tools for this on the Product side. Hopefully Firefox OS and/or the W3C will somehow force it into Commodity soon. For this reason I’m not worrying about this for MPCV – mobile web only.

Design templates: I’m using Bootstrap because we use it at ScraperWiki, but it has lots of competitors snapping at its heels. This is well over to the right of Product, getting on into Commodity soon.

Email sending: Firmly in Product. SendGrid and Mailgun are popular and work well. But you have to think about it – it is not like water. In some ways it is worse: on old Unix servers back in the low-spam days, it was more of a Commodity.
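For contrast, the standardised part really is standard – Python’s standard library can build and hand a message to any SMTP relay in a few lines. A minimal sketch (the addresses and relay details are placeholders; deliverability, reputation and bounce handling are where the Product-stage services earn their keep):

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Constructing a message is the standardised, commodity-feeling part.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_message(msg, smtp_host="localhost", smtp_port=25):
    # Delivery is the hard part that services like SendGrid handle for you.
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.send_message(msg)

# Placeholder addresses, for illustration only:
msg = build_message("mpcv@example.org", "voter@example.com",
                    "Confirm your email", "Click the link below to confirm.")
```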

User identity: The likes of Facebook and Google try to grab terrain here, but both developers and their users are wary. There are a few products like Stormpath, none that great yet. Mozilla Persona is a tantalisingly close abandoned attempt – it at least met developers’ need to keep responsibility for their own users. In short, this area is still Custom built. Because that’s what everyone does – it’s 2015 and we’re still rolling our own email confirmations.
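For the record, the hand-rolled version is short – a sketch of a signed confirmation token using only the standard library. The names are illustrative and the key handling is simplified; this is the shape of the idea, not a vetted design:

```python
import hmac
import hashlib
import secrets

# In real use this would be loaded from configuration, not generated per run.
SECRET_KEY = secrets.token_bytes(32)

def confirmation_token(email):
    # Sign the address so a confirmation link can't be forged,
    # or replayed for a different email address.
    return hmac.new(SECRET_KEY, email.encode(), hashlib.sha256).hexdigest()

def confirm(email, token):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(confirmation_token(email), token)

token = confirmation_token("alice@example.com")
```

Everyone writes a variant of this; that it still has to be written at all is the Custom built problem.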

Developer identity: Every single one of the other products in this list requires you to make a new user account – or, these days, one for each person working on the project – and to enable and back up your 2FA codes. An unexpected frustration for me working on MPCV is that it is a throwaway project – it doesn’t have organisational boundaries yet. Sometimes I literally got blocked, not knowing what to name an account. And I had severe limits on how often I wanted to add a credit card subscription for a short project. Heroku’s app store made me even more confused about this, and nobody seems to use “log in with GitHub”. This is at the Custom built phase.

Encryption: All good sites need to be over HTTPS these days. Getting the certificates is a rip-off Product. The Electronic Frontier Foundation are improving this with free certificates later this year.

Democracy data: For my purposes, mySociety’s Mapit (for postcode to Parliamentary constituency lookup), and DemocracyClub’s YourNextMP (to get the candidates in a constituency) are fantastic. It feels like Commodity territory.
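The glue needed is pleasingly small. For example, a sketch of building the MapIt lookup URL for a postcode – the `/postcode/` endpoint shape here is as I remember MapIt’s documentation, so treat it as an assumption:

```python
from urllib.parse import quote

MAPIT_BASE = "https://mapit.mysociety.org"

def postcode_lookup_url(postcode):
    # Normalise case and spacing before building the lookup URL.
    cleaned = postcode.strip().upper().replace(" ", "")
    return f"{MAPIT_BASE}/postcode/{quote(cleaned)}"

print(postcode_lookup_url("sw1a 1aa"))
# https://mapit.mysociety.org/postcode/SW1A1AA
```

One GET against that URL returns JSON including the Parliamentary constituency – that is what Commodity territory feels like.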

I’m going to update this post as I come across more categories during development.

Conclusion

There are big big problems with identity. Both my identity and the identity of my users are taking up far too much of my time and attention. This feels like a core weakness – I hope Mozilla try again. I would attack here.

There are too many services scattered everywhere. I don’t think either Amazon or Heroku are doing a good job at bringing them together. They, or Google or Microsoft, will eventually.

Mobile apps are an embarrassing disaster. I hold little hope that there is an attack point against this, but who knows.

Promising to make software safer

1. Virtual bugs

I first really knew that all software was fundamentally insecure back in 2001.

I was working for an artificial life games company. We made virtual pets – amazing ones with a simulated brain, biochemistry and genetics.

Creatures Docking Station

I’d just built a new networked version called Creatures Docking Station. It let the cute, furry, egg-laying Norns travel through portals, crossing the Internet directly between players’ computers.

The game engine was written in C++. It was fiendishly complicated – neuron simulators for the Norn brains, a scripting language implementing all the in-game objects. About 20 people had worked on it, with varying needs, skills and time pressures.

I knew that there were bugs in it. I’d previously stress tested the code – randomly mutating Norns and force breeding them with each other in a diabolical machine, while the game was running in a debugger. It found a new crash bug every hour – I’d tap Gavin on the shoulder and get him to fix each one. We never got them all.
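That diabolical machine was a home-grown form of what would now be called fuzz testing. The real harness drove the C++ engine; this toy Python sketch only shows the shape of the idea, with a stand-in target that crashes on certain byte patterns:

```python
import random

def fragile_parser(data):
    # Stand-in for the system under test: crashes on a particular input.
    if 0xFF in data:
        raise RuntimeError("crash: malformed genome")
    return sum(data)

def fuzz(iterations=10_000, seed=42):
    # Throw random mutated inputs at the target, recording what crashes it.
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        genome = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            fragile_parser(genome)
        except RuntimeError:
            crashes.append(genome)
    return crashes

crashes = fuzz()
```

In our case each crashing input went straight to Gavin; modern fuzzers such as AFL automate much of that triage.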

The symptom – the game crashing occasionally due to a mutation – wasn’t itself a world shattering problem. No real lives were on the line. Bad user experience, but so what?

Two reasons:

1) In C++, bugs in this category let an attacker do anything they like. That is, much like a chainsaw: with great power comes great responsibility.

2) With the new networked game, it would let an attacker do anything they liked, remotely and automatically from across the network.

In short, a player of our game could have their machine taken over remotely – their documents deleted, spam sent, their Internet banking password sniffed (not that many people used Internet banking back then). Whatever the attacker wanted.

At the time, there was no tool or technology or budget available for me to fix this. I did what every programmer did – closed my eyes. Ignored the problem. Hoped nobody would do bad things with it.

I knew though that nearly all general purpose software, particularly written in C/C++, was likely to be insecure.

2. A simple promise

Wind forward to 2015.

I’ve been worrying for a while about the long, cold cyberwar. A small part of that war is basic security of all computer systems – so it’s hard for criminals or rogue states to, say, remotely turn on your microphone without you knowing.

Linking this to my old experience with C++, and a constant flow of security vulnerabilities which could only happen to C/C++ code, I had the idea that as an industry we should stop using C/C++.

Peter had shown me how good Go is now (we use it a lot at work), making my historical need for C/C++ obsolete. Suddenly, it felt possible to completely stop using those languages.

This comment on Hacker News finally provoked me into actually doing something. I knocked up a very simple promise site in an hour on a Sunday afternoon.

I promise never to use C/C++ for a new project

If you’re a programmer, go on, go and sign it!

This seemingly simple promise felt like putting my head over a parapet during a siege. I learnt quite a lot.

3. Things I learnt

Embedded systems – lots of people wouldn’t take the promise because they work on embedded systems where C dominates. Others pointed out various rays of hope – Python for microcontrollers, OCaml for PICs, LLVM for AVR chips, embedded Rust, even Go for Arduinos (OK, not quite!). Are those going to be good enough, even when you have to hand-tune code to hit cycle budgets?

My view is that it is particularly important to sort this out now. Embedded devices are joining the internet more and more – even if you’re writing something which is standalone now, some other programmer will connect it to something in 5 or 10 years. I don’t want my physical devices to be easy to hack into. The pledge I’d really like embedded systems developers to take is to try using and improve on the new more secure toolchains.

C++ is secure now – a few people pointed out that C++14 can now be used with safe pointers and sanitisers. Others have proposed friendly dialects of C where you turn all the safe compiler options on.

In principle I’m up for this, but only if it is forced in an explicit language variant – otherwise someone will shoot themselves in the foot later. I’m not sure it is worth it in most cases, compared to using Go or Rust. Either way, legacy C/C++ code is the really big issue.

Go or Rust flaws – a few people don’t like them, sometimes for aesthetic syntax reasons, sometimes claiming they are hard to use. I don’t think C has a particularly great syntax – I can remember trying to learn it when I was 15, it wasn’t easy. Sure, if you don’t like them, pick something else. It doesn’t mean you have to juggle with chainsaws.

Of course, these new languages still have parts written in C, at least for now. There can always be bugs in their compilers and assemblers. I don’t think this is a big problem, as those parts are a much, much smaller attack surface – albeit a valuable one.

Application binary interfaces – what can we use instead of C as the standard ABI? Pretty well all languages in the “open source” world interoperate with each other via C bindings. If you took my promise, would you still be able to write Python bindings to an existing C library? Pretending we don’t need C is just fantasy.

This is by far the best criticism. Of course, the Java and .NET worlds have spent a decade building entirely new ecosystems which strongly discourage C bindings. So it’s perfectly possible. We will need something specific to use instead. I don’t know what it should be – this needs strong leadership, maybe from the Rust people.
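To make the question concrete – calling into an existing C library from Python over the C ABI takes only a few lines with `ctypes`. A Linux-flavoured sketch using the system maths library (the library name and location vary by platform):

```python
import ctypes
import ctypes.util

# Locate and load the C maths library through the platform's C ABI.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double).
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # ≈ 1.4142
```

Any post-C world needs a replacement for exactly this kind of low-friction calling convention.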

Long, cold cyberwar

Berlinermauer

Let’s keep this post simple.

We’re near the start of a long, cold cyberwar. Many things make this clear – from Stuxnet to Snowden, from the Sony hacks to Chinese DNS poisoning.

This is a hard time to be in information technology.

Just in raw, technical security terms this is tough – rebuilding every layer of computing infrastructure so that it is safe.

And that’s the easy problem.

The hard problems are emotionally and politically challenging: We have to prevent automated privacy invasion from creating new powerful fascist states. We have to keep the Internet competitive and innovative – a positive creative force.

To give a hint at how hard it is, here are three harsh yet promising articles on key subjects.

I don’t know how long this war will take. I’d prepare for, say, a century.

It took many times longer than that after the invention of the printing press for everyday ideas like copyright, the novel, universal literacy and the public library to settle down.

If it feels tough, that’s because it should be.

If, like me, you’re a programmer, the days of rainbows and unicorns are gone. It’s now about moral responsibility, professional integrity and the strategic creation of new concepts.

Let’s get to it.

I blinked and missed 6 exciting things in the last 20 years of space

I blinked.

A long, slow, twenty year blink.

And meanwhile, space exploration went… Phoooom!

From a distance it looks bad. We haven’t sent humans to the moon for over forty years. There’s no grand, visible, memorable showpiece – apart from space shuttles exploding and being decommissioned.

And yet, when I recently got interested again, I found a flurry of things had happened. Many I had seen in passing, but not really looked at. All together, they add up to something amazing.

1. Space telescopes

It’s simple. Without the atmosphere in the way, you get better pictures. Beautiful pictures.

Much-loved space telescope Hubble reached orbit in 1990. You’ll have seen its iconic photo of the Eagle nebula pillars.

But have you seen this one of young stars sparkling into life 20,000 light-years away?

There are endless more. I get lost reading the Atlas of Peculiar Galaxies and looking at Hubble’s top 100 images. Trying to imagine what they all really are.

2. Space station

You can’t even say this new space world excludes humans.

Since 1998, a station has orbited above us – continuously inhabited since 2000.

Recently, the Internet has given this a new intimacy. You can follow Reid Wiseman, tweeting pretty constantly from space. 3 million people have watched this tour of the station by a departing commander.

Most immediate of all, the HDEV video project gives continuous pictures of the earth from the space station. Through your tablet you can stare, imagining you’re up there, looking down on our planet during a break. Every hour and a half, snaking around the world.

3. Gamma-ray bursts

Every now and again we detect a fantastic, crazily strong pulse of energy, as much as the sun emits in its entire 10 billion year lifetime. Nobody knows what they are – perhaps neutron stars colliding with black holes?
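That comparison is easy to sanity-check with rough numbers (taking the solar luminosity as about 3.8 × 10²⁶ watts):

```python
# Order-of-magnitude check: the sun's total lifetime energy output.
SOLAR_LUMINOSITY_W = 3.8e26   # watts
LIFETIME_YEARS = 10e9         # ten billion years
SECONDS_PER_YEAR = 3.15e7

lifetime_output_joules = SOLAR_LUMINOSITY_W * LIFETIME_YEARS * SECONDS_PER_YEAR
print(f"{lifetime_output_joules:.1e} J")  # about 1.2e44 J

# Bright gamma-ray bursts have isotropic-equivalent energies of this
# order (around 1e44 joules) – released in seconds to minutes.
```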

To investigate this, astronomers made a series of increasingly powerful satellites, culminating in Swift, which was launched in 2004 and is still running now.

About once a day, Swift detects a Gamma-ray burst and sends its location out via the Gamma-ray Burst Coordinates Network. Immediately, telescopes all over earth and space swivel towards the burst, capturing its afterglow to try and learn more about it.

We don’t have very good photos of them yet. Although there are some gorgeous artists’ impressions.

The nearest compelling real picture is this one of a Wolf-Rayet star – a class of star that is a possible cause of the bursts.

4. Roving on Mars

You can hardly have missed that two chirpy robots have been wandering all over Mars since 2004.

This year, the remaining one finally broke the record for the longest distance travelled on another world, previously held by a 1970s Soviet moon rover.

They’ve made major scientific discoveries about the atmosphere and geology of Mars.

For example, we now know lots about the water on Mars (there are even polar ice caps!).

More immediately, though, they’ve taken detailed panoramic travel photos of another world. For example, Opportunity snapped this one of the Victoria Crater. Hover over it and scroll right to see more.

5. Landing on a comet

It’s impossible to describe the Rosetta probe’s crazy journey, ricocheting off four planets to gain speed. You’ll just have to watch this video.

Ten years in, Rosetta is now in a kind of jolty orbit round a comet. Comets are important as they may contain rock similar to that of the early solar system, and they may have been the source of water on earth.

In this photo taken by Rosetta, you can see the dumbbell-shaped comet, and the faint jet of the comet’s tail which is just firing up as the comet heads nearer to the sun.

This month, Rosetta is going to release a baby probe which will land on the comet.

6. Baby universe

Maybe you’re tired of looking at the 12 billion year old galaxies in Hubble’s picture of the deep field, and want something a bit older.

Radiation from shortly after the universe began, travelling for 13.7 billion years, finally reaches us each day. It’s from a time before stars, when the universe was a cloud of hot gas.

We’ve known about this “cosmic microwave background” radiation since the 1960s. It’s key evidence for the big bang.

Not content with low detail pictures, cosmologists have made a series of spacecraft. WMAP was launched in 2001. It lived way out, cuddled at a gravitational balance point beyond the earth, on the far side from the sun.

After 9 years of observations, it produced this image. The colours represent temperature fluctuations which then grew to become galaxies.

Seeds in a photo of our baby universe.

An email to Nicholas

Dear Nicholas,

Thank you for your previous two letters. I’m sorry I was so slow getting back to you after the first one, that you had to write another.

I didn’t know Canon meant essentially the same thing as Round. I’m sure I must have been told, but I never got what it meant or cared. I only really appreciated rounds at all in actually singing them with people at Kentwell (the Tudor recreation thing I do).

Amusingly, I looked up the most complex round I know (it isn’t instant to learn, as the note of “well” sounds off, so you have to teach it to drunk people carefully), which is the original “cat is in the well”. It’s called Ding Dong Bell in Ravenscroft’s Pammelia. Where, I just now amusingly found, it is headed “Canons in the unison”.

While trying to find out what exactly round means just now, I came across what glees were originally, and glee clubs. I really wish they still existed!

The ground bass is clearly an important reason that I like Pachelbel’s Canon. And it doesn’t vary in volume. Oh, and there are hardly any instruments – I think adding any more is basically a waste of time for me, as I’ll fail to pick them out.

Moving on to your second letter… It started to irk me right at the beginning – only at the end did you give me an easy way to articulate why. I wasn’t moved by Williams’s musical version of Christina Rossetti’s poem. Worse, I wasn’t even moved by the poem itself!

Doing as you say and describing my emotional reactions the first time I heard it…

The voice was irritating, overly oscillating such that I couldn’t pick out the words. It actually managed to make the poem harder to understand. There were some uplifting musical bits in the middle, but the tedium of the vocal parts overruled them.

As for the poem, my! It glazes my eyes over, making me simply not want to read it. It is full of metaphors that have no meaning to me. To such an extent that I’d have to force myself to read it as whipped homework to get anywhere further with it at all.

I am going to take your advice to not try to “understand” music :)

I agree with you that over analysis and understanding can defeat the joy of music. What it can do though, is break down practical barriers. I’d like a music recommendation service which could say “don’t bother Francis with Wagner, basically nobody with your low volume range of hearing ever ends up liking it particularly”.

For people who are good at music, and/or who have fallen deeply into one genre pool they can’t see out of, these barriers are as fleas to a giant. To those in old people’s homes, or whose voices have just broken, or even who are deaf, and have had music torn from them, often unknowingly… They are so important!

To slightly shift subject, I just got back from Bearded Theory. Four relevant musical observations from it:

  • To our surprise, we loved the ambient tent, at the right moments. Not being about love or sex was such a relief, the wilful suspension of the primate social, abandoned for rhythm, the raw dance. e.g. The Orb.
  • Revived acts, from The Stranglers to UB40, were just irritating. They had the odd song you knew, but they weren’t the same as when they were young, and if you didn’t like them already, seeing them live wasn’t going to change that. This alone makes it worth supporting new acts, despite the cornucopia of amazing historic music we have at a click now. (Ironically, that’s contradicted by me liking The Orb for the first time above!)
  • It’s fun playing the Ukulele and/or singing (or Kazooing along). Beardy Keef did a jam, managing to get half the famous musicians on site to turn up too. My strumming sucked, and I couldn’t instantly remember chord patterns after the first verse (they were unlabelled on the second)… But still, that Uke, it brings down barriers. Far easier than a recorder or a piano to learn to that important stage of “have fun with”.
  • The Monster Ceilidh Band are great.

So yeah, I don’t need sophisticated analysis of music. (Although the part of me that wants to understand consciousness, and suspects music is a vital hack on our brains that will reveal a lot about them, is curious.)

Instead, I want analysis so people can have fun, without being put off by usability barriers that there are gorgeous ways round.

But there is a danger that we become distracted by such intellectual diversions in a similar way that one might become fixated by the form of a Sonnet while missing its meaning.

It works both ways. To return to Dr LJ’s Tweet… Is everyone, even just in England, actually hearing Beethoven’s 9th? What’s the most efficient way to make that possible, in the cases where they would like it but just don’t have a way of getting to it?

Coincidentally I was at Bearded Theory with a music therapist (there are very relevant links to papers and things on the News and Downloads page!). Singing war solidarity songs to people with dementia… Makes sense to me.

And alas you need research, like in the paper Dr LJ linked to, to stand a chance at knowing how much to spend on nursing and how much on music.

Best wishes,

Francis

Properly funding Democracy Club

Democracy Club logo

Politics is broken.

At the last election, a few people made an amazing organisation to try and fix it.

Democracy Club is a non-party-political group of volunteers. At the next election, we want to hold candidates to account, and stimulate public engagement.

We do this by emailing people small, easily achievable tasks. These small tasks will add up to hugely useful resources. (Democracy Club about page)

7000 volunteers (in every constituency!) found out who the candidates were, what they thought about local and national issues, and monitored their election leaflets. (I wrote up what they did on the OKFN blog.)

Amazingly, another group of people is emerging who want to do it again. Better. I think it can have a real impact. However, to really have reach on a national scale, it needs money. I want Democracy Club to be a permanent national institution.

The question is, would you pay?

It’s really hard making a new revenue model work – there are lots of risks. We could just run a Kickstarter to fund this General Election. Then, everything would collapse again come May 2015. I think this is too important – we should do local elections and European elections too, and build up information, volunteers and media contacts between elections.

I want to pay monthly.

The question is, do enough people? Is it even feasible? To find out, I would very much appreciate it if you could fill in the short questionnaire below.

I’ll blog again [update: left a comment below instead] with the answers in a week or two.

Thanks for your help! Let’s fix our politics.

Irony of extensions that remove junk slowing Chrome down

I thought I was ruthless at not installing browser extensions. It’s part of the process of getting old, customising things less and less. Despite that, I’ve accumulated seven extensions.

(The most unusual and interesting one lurking in there is Churnalism, which tries to tell you whose press release each newspaper story is copied and pasted from.)

As you can see, this morning I disabled them all.

Chrome extensions

Why?

I noticed that loading my list of datasets on ScraperWiki, a relatively complicated but these days not untypical SaaS application, was taking around 2 seconds (that’s the DOMContentLoaded number in the Network tab of Chrome’s developer tools – actually I use Chromium on a Mac, if that matters).

When I disabled all the extensions, it took only 1.5 seconds.

Timing to load page in Chrome

I briefly tried a binary search to find the culprit. It turned out there wasn’t a specific one – just lots of plugins scanning the DOM against lists of URLs to remove privacy bugs or block adverts, each one eating a small-sounding slice of time that adds up to a large overall slowdown. Further and proper statistical study required.
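For anyone curious, the binary-search idea can be sketched like this – a toy Python version where the extension names and the slowness check are invented, and which assumes (unlike my real case) that a single culprit exists:

```python
# Toy sketch of bisecting a list of extensions to find one that slows a page.
# "is_slow" stands in for "enable only these extensions, reload, and time
# DOMContentLoaded" – here it's faked so the sketch is self-contained.

def find_culprit(extensions, is_slow):
    """Bisect `extensions` to find the single one that makes `is_slow`
    return True. Assumes exactly one culprit exists."""
    suspects = list(extensions)
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        # Enable only the first half and re-measure the page load.
        if is_slow(half):
            suspects = half
        else:
            suspects = suspects[len(half):]
    return suspects[0]

# Made-up extension names and a fake timing check, for illustration only.
def is_slow(enabled):
    return "ad-blocker-3" in enabled

extensions = ["ghostery", "churnalism", "ad-blocker-3", "https-everywhere"]
print(find_culprit(extensions, is_slow))  # ad-blocker-3
```

With n extensions this needs only about log2(n) reloads – the catch, as I found, is that it tells you nothing useful when the slowdown is spread evenly across all of them.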

I must have gradually installed them in a fit of solidarity with privacy software makers post-Snowden. My previous policy was to try to experience the web the way non-experts do, so I would be forced either to make it more usable, or at least to suffer along with everyone else.

Now I’ve turned off all the plugins, the whole web feels fresh and fast again.

Skills you need to get product/market fit, all in a line

When delivering a new product or service under conditions of extreme uncertainty, you need a range of skills.

To help understand what skills are needed in a startup team, I’ve found it useful to think of people’s skills as being spread out along this line.

Line of startup skills

Consider people you know, and place them on the line. e.g. A UX designer who can also code Javascript is maybe somewhere towards the technical end of product.

1. Technical

This is someone who understands the substance the product is made of in great detail, and can work it to do what is needed. So for software, a computer geek.

2. Product

Deciding what the product should do, by both understanding what is possible with the technology, and what is needed by users. A product manager.

3. Market

Classifying who customers are, what inspires them, how to find them. Using whatever methods it takes to bring them to the product they want. A marketeer.

4. Sales

Filling that last gap between a product and its users. Cold calls, hustles, pesters. Whatever it takes to bridge two organizations. A salesperson.

Product/market fit boundary

Some lucky (skilful!) people sit right in the middle. They have a balance of product-side and market-side skills, and as a result often seem preternaturally good at crafting organizations.

You can train yourself to get nearer to that place – sales people can learn more product design, geeks can learn about marketing. And of course, you can join forces with people with complementary skills.

Why put all this on a line?

Two main reasons.

1. People’s skills are more likely to be near some point on the line. It’s common to find technical people with some product skills, rare for them to have full-on sales skills. It’s helpful in terms of spotting balance – if your technical person is very extreme, it might help to have a product person who is over towards the market side.

2. There are analogies (homomorphisms, if you know your maths) between the roles, as if the product/market fit boundary were an Alice in Wonderland mirror.

Alice's mirror

  • Hacker (doing all to make tech work) is the mirror of Hustler (doing all to make a deal work)
  • Developers claim they have time to code everything, Salesmen claim they have time to sell to everyone
  • Geeks tell you too much about particular tech, Salesmen tell you too much about particular deals
  • Making (what product people do) in the mirror is Meaning (what marketing people do)

(I suspect these homomorphisms are quite deep; see my post on product and market being the same thing for a taste of why that might be.)

I’ve found such analogies help me understand both the vital importance, and the weaknesses, of those different from me. It emphasises that to create a viable new product or service, you need all four skills working in unison.

In a world where a key limiting factor in the creation of new businesses is a lack of social ties between product-side and market-side people, it’s a start.

How we used email as a customer support system at mySociety

Customers

You’ve seen it. The red eyes… An ennui for life…

The drained sadness of someone who has been lost for weeks in a customer support ticket tracking system.

At mySociety (the awesome Internet democracy charity I was on the founding team of) we tried using Request Tracker for a while, and quickly fled.

We could flee, because we had the comfort of a simple email based system to return to.

It worked like this:

a) All support or feedback email comes in to a particular address (we used team@ – for example team@theyworkforyou.com for the site TheyWorkForYou)

b) That address is set up as an alias (like a group in Google Apps) to transparently forward mail to everyone working on that product.

c) Everyone filters their email, so those support emails all go into a special support folder.

d) When replying to customers, always use reply to all. This is so replies go into everyone else’s support folder, both so they have a record, and so they know the customer has been replied to.

e) When either you or someone else have fully dealt with a mail or a thread, archive it. Otherwise, leave it in the support folder.

f) Be really disciplined about this. Anything in the support folder represents a customer who isn’t satisfied.

g) Make sure at least one person on the team goes and looks at slightly older, harder messages, and bullies appropriate people into resolving them one way or the other.
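To make the moving parts concrete, here’s a tiny Python simulation of steps (a)–(f) – not real mail code, and the member names and messages are made up; it just shows the invariant that whatever is left in anyone’s support folder is an unanswered customer:

```python
class TeamInbox:
    """Toy model of the team@ alias: every incoming mail lands in each
    member's support folder; a reply-all lets everyone archive the thread."""

    def __init__(self, members):
        self.folders = {m: [] for m in members}   # (c) per-person support folder

    def incoming(self, msg):
        # (a)/(b): mail to team@ is transparently forwarded to everyone
        for folder in self.folders.values():
            folder.append(msg)

    def reply_all(self, msg):
        # (d)/(e): the reply goes to everyone, so everyone archives the thread
        for folder in self.folders.values():
            if msg in folder:
                folder.remove(msg)

    def outstanding(self):
        # (f): anything still here is a customer who isn't satisfied
        return sorted({msg for folder in self.folders.values() for msg in folder})

team = TeamInbox(["alice", "bob"])
team.incoming("Why is my FOI request stuck?")
team.incoming("Found a bug in the postcode search")
team.reply_all("Why is my FOI request stuck?")
print(team.outstanding())  # ['Found a bug in the postcode search']
```

The whole system is really just this invariant plus the discipline of step (f): an empty support folder means every customer has been answered.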

This works particularly well early on in a product, when there is relatively little support. It’s especially important then that everyone working on the product lives and breathes the customers. Even just seeing the emails go past with other people answering them can help with that.

I’ve used it to manage support for probably a dozen web products in total. It’s surprisingly robust…

It worked well (and still does as far as I know!) with half a dozen volunteers on WhatDoTheyKnow. It worked fine even when we were launching the Downing Street petitions system, with millions of users and front page newspaper stories (Matthew had many long sessions of email answering – and that’s a good thing!). We’ve just been looking at customer support systems for ScraperWiki, the startup that I run.

The main products (like ZenDesk) seem to be aimed, both in price and functionality, at larger, more corporate organisations than we are. They look overly complicated; we really only need something as featured as the Github Issues tracker.

I don’t know what we’ll move to using yet – that’s partly why I’m writing up a description of mySociety’s email based system.

It could perhaps be simplified by using a shared GMail account. Careful use of labels and folders would also make it more powerful.

It would need some training and discipline. Everyone handles their personal email differently. When it is customer support requests, it needs more discipline than just chilling out in the tao of a flow of passing messages.

This is natural to people who have the horrible habit of using email as a todo list, less so to anyone else.

Comments please! What questions do you have about the above system?

And can you recommend a lightweight customer support tracker?