Announcing “Mindful Media Club”

I’m starting a meetup / online community to share tips on skillful use of social media.

Things like: browser plugins that let you customise YouTube, tips on settings, and social practices such as how to form a healthy, active WhatsApp group for a particular purpose. And so on.

Sign up to the first event if you’re in London and you’re interested!

Wherever you are, there’s a Discord. Depending how it goes, the aim is to make an online guide with all the tips in.

Please do ask questions / give me feedback and ideas!

A week making something each day

As a challenge, last week I made five things, one each day. Each had to be finished in some sense, and preferably published. This is what I made and what I learnt!

Monday – Godot game

My goal was to learn enough Godot to write some kind of video game and publish it, all in one day. Incredibly, this was fairly straightforward. Things I learnt:

  • This video tutorial and the text one in the main docs are both great starting places.
  • Physics engines are really good and easy to use now, compared to 2002, when I last coded games with them.
  • Open source game engines are genuinely very mature.
  • It’s satisfying making something that just runs locally and is very visual.

Itch.io is extremely generous in letting you just make a page for your game. It’s pretty liberating – no servers or DNS to think about like with a website, and no complicated signing mechanism like an iOS app. Although I’m not confident the Windows build I made worked – only the Mac one…

I had to compromise quite a lot on the game, and it isn’t great. But it has a character, objects you manipulate, a goal, and one level.

Tuesday – LLM solver

For a job interview, I wrote a program using an LLM to perform an algorithmic task, one involving some aesthetic judgement. For obvious reasons I can’t say more about it! The new thing here was using OpenAI functions, which I hadn’t done before. Things I learnt / remembered:

  • OpenAI function calling is clunky, as the returned structure still isn’t guaranteed – see the sketch after this list
  • Rate limits on a personal OpenAI account are quite low and easy to hit
  • It’s fun making things with LLMs – it feels powerful and surprising, and fresh
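
Here’s a minimal sketch of the defensive parsing that’s needed, using the 2023-era (pre-1.0) openai Python library and a made-up function definition – nothing from the actual interview task:

```python
import json
import openai  # pre-1.0 interface, i.e. openai.ChatCompletion

openai.api_key = "sk-..."  # placeholder

# A hypothetical function the model can "call"
functions = [{
    "name": "submit_score",
    "description": "Record a numeric aesthetic score for a candidate",
    "parameters": {
        "type": "object",
        "properties": {"score": {"type": "number"}},
        "required": ["score"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Rate this candidate: ..."}],
    functions=functions,
    function_call="auto",
)

# The clunky part: the model may not call the function at all, and the
# arguments come back as a string that isn't guaranteed to be valid JSON.
message = response["choices"][0]["message"]
score = None
if "function_call" in message:
    try:
        score = json.loads(message["function_call"]["arguments"])["score"]
    except (json.JSONDecodeError, KeyError):
        pass  # fall back or retry – the structure isn't guaranteed
```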

Wednesday – Browser extension

I started coding Instalamb last year, an extension to customise Instagram, for example by removing recommendations. Today my goal was to finish off a first version, package it, and submit it to the Mozilla addons site. Things I learnt:

  • When modifying the DOM of dynamic React applications, it’s best to only alter the styling of individual elements. Removing elements causes strange crashes. Accordingly, I moved a few elements off-screen with absolute positioning, or hid them behind other things.
  • At least for Firefox, extension packaging is crazy simple. You just zip up the manifest and JavaScript files and so on. That’s it. It makes publishing to other platforms a bit embarrassing. My first version was just 2219 bytes long.
  • It’s very hard to manipulate infinite scrolling. The main Instagram feed has a small number of post DOM entries which it rotates through. Common failure cases of manipulating this were breaking the whole page, or it endlessly loading invisible posts.

Get in touch if you want to try it out – it isn’t quite at public release stage yet. A couple of users who want to customise Instagram in some way would be great.

Thursday – Mind sampler

I’m a big fan of Hurlburt’s Descriptive Experience Sampling, in which a random alarm reminds someone to take note of what they were thinking just before it went off. My goal was to write a mobile app to help me do this for myself. Things I learnt:

  • Local notification alarm APIs either don’t exist or are different for iOS and Android in both Flutter and React Native.
  • After spending many hours trying and failing to make a mobile app, I realised I could just use Tasker on Android to do it. See my previous post where I used Tasker to control my smart thermostat.
  • In Tasker, it’s not too hard to schedule a task every 2 hours in the day which generates a random variable from 0 to 119 minutes and waits for it – see someone’s post about this, and the sketch after this list.
  • That can then trigger a notification, with a button to open a text file. The sound and icon can be customised. It’s important to put a mime type in the file open command so it finds the app without an extra prompt.
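
For anyone who prefers code to Tasker configuration, the scheduling logic amounts to something like this Python sketch (illustrative only – Tasker doesn’t run Python):

```python
import random
import time

# Experience sampling: within each 2-hour window, beep once at a
# uniformly random moment, so the beeps stay unpredictable.
while True:
    offset_minutes = random.randint(0, 119)   # random point in the window
    time.sleep(offset_minutes * 60)
    print("Beep! What was in your mind just now? Write it down.")
    time.sleep((120 - offset_minutes) * 60)   # sleep out the rest of the window
```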

This looks like it is going to work, so hopefully I’ll now find out what percentage of my time I’m paying attention to my senses, what percentage I’m doing unsymbolised conceptual thinking and so on. I’ll report back somehow.

Friday – Vox pop video

My podcast “Imagine an apple”, about what it’s like to be in our different minds, has just got going. I’ve always wanted to interview a bunch of people about how they use their imagination and compare them, and also to use video so that in the end I can animate what they see too.

Today I did a prototype, filming people on the streets of London and editing it into a finished video. Lessons learnt:

  • It feels hard getting strangers to talk to you in London, but when you do, they love it.
  • The phone battery drains pretty quickly for a relatively short amount of video, so plan for that.
  • Editing in iMovie is good enough, but I’d look for something else next time. For example, it doesn’t really seem designed for portrait video, which is a bizarre limitation these days.
  • I could spend forever learning to get better at video filming and editing – it’s lovely getting a practical feel for why that is.

I think with enough footage to cherry-pick the good, surprising bits, and careful editing, this format could work really well. It would need to be denser. Jump cuts contrasting people saying different things about the same aspect of their imaginations worked best.

You can watch the video over here, but it is just a prototype! Do give me feedback or ideas.

Overall lessons

  • Doing one thing every day is pretty tiring, like an endless loop of hackdays. However, the pressure and creative diversity made it worth it.
  • Practically, the projects accumulate. You can end up with several things to follow up on – in my case fixing a couple of things in Instalamb, and analysing all the data I’m sampling about my own mind.
  • It’s pretty remarkable that the Internet, software and AI combined let me get all the above done in one week. The amount wouldn’t be surprising if I were doing this all the time – but in each case I was doing something very new to me.

Netatmo smart thermostat! How much gas I saved last winter, and how to automate it to turn off when out (not using IFTTT)

This post has two sections, one about gas prices and energy savings, the other about Android automation and the Netatmo API.

1. What a smart thermostat is like and how much money I saved

Last year, with gas prices going up, I decided to get smart radiator valves. I’d thought the saving from these would come from only heating rooms when I’m in them – I was too lazy to go round adjusting the manual radiator valves several times a day!

Having had them for over a year, I now think the saving comes from the thermostat being a modulating one. This means it adjusts the boiler strength continuously, so the boiler works more efficiently (the old thermostat could only turn it on or off).

My new heating also feels really good. The rooms have a more balanced and even temperature. I tried letting the temperature drop to below 10°C in unused rooms at night, but it was worse in both energy use and pleasure than keeping a minimum of about 15°C.

I’ve got the Netatmo thermostat. It’s great. I particularly like that it has some kind of e-ink-style display, powerless when not changing, that shows the current temperature all the time. Overall the hardware, fitting it myself, and the app are all quite good. The software side is a bit ropey though – still no web application, and a major missing feature (see next section).

It saved me money! It’ll take 2.5 years to pay back the £480 total cost of the radiator thermostats – roughly £190 saved per winter. That’s assuming gas prices stay as high as last winter, and I don’t charge myself interest. Not too shabby. Have a look at the rough spreadsheet if you’re interested in the details.

2. How to automate it to turn off when you go out

Unfortunately, one downside hit at the start of this year. Netatmo suspended their IFTTT integration. This means there is no official way to make the heating turn off when you go out, and turn on when you get back home. This is quite important for saving energy!

I’ve hacked together my own method, using my new Fairphone. It is very bespoke, and involves programming. However, in these days of AI coding assistants, maybe more people than ever can get this sort of thing working.

Netatmo’s API is fantastic, and can still set my thermostat to away / at home modes. So I did the following steps – you’ll need an Android phone:

  1. Wrote a Python script netatmo-fai.py which can set the mode on the thermostat from the command line (a sketch of the idea follows this list). There are instructions in the script – you need to register an app as a developer at Netatmo, and make an initial token on their website.
  2. Installed the incredible Termux, a Linux distribution that runs entirely inside an Android app, without root. Copied the script over (you can use git to do that) and got it working inside Android.
  3. Installed the power-user Tasker app and, crucially, Termux:Tasker which connects it to Termux. Tasker is a bit like iOS’s Shortcuts feature, only both more powerful and harder to configure.
  4. Set up a profile based on the “Wifi Connected” state. I called it “Home – Wifi”, and set it to run when connected to the SSID (name) of my home Wi-Fi network. I found Wi-Fi events very reliable for this, and they don’t need a foreground notification (see below).
  5. Created a “Tasker Function” task which runs the Python script with appropriate parameters to turn on the heating, and set the profile to execute that function.
  6. Created the opposite task which turns off the heating, then long-pressed the profile and added it as an “Exit Task”.
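
To give a flavour of step 1, here’s a minimal sketch of the kind of thing the script does. The credentials are placeholders, and you should check Netatmo’s Energy API documentation for the current endpoints and token flow:

```python
import sys
import requests

# Placeholders – created when you register an app with Netatmo's developer site
CLIENT_ID = "your-app-id"
CLIENT_SECRET = "your-app-secret"
REFRESH_TOKEN = "your-refresh-token"
HOME_ID = "your-home-id"

def get_access_token():
    # Exchange the long-lived refresh token for a short-lived access token
    resp = requests.post("https://api.netatmo.com/oauth2/token", data={
        "grant_type": "refresh_token",
        "refresh_token": REFRESH_TOKEN,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def set_mode(mode):
    # mode is "away" or "schedule" (i.e. back to the normal timetable)
    resp = requests.post(
        "https://api.netatmo.com/api/setthermmode",
        headers={"Authorization": "Bearer " + get_access_token()},
        data={"home_id": HOME_ID, "mode": mode},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    set_mode(sys.argv[1])  # e.g. python thermostat.py away
```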

Now you can test it – by turning Wi-Fi on and off! Be aware that if your router breaks, your heating will turn off… If that isn’t suitable for your phone / Wi-Fi setup, it works well with a Location profile too – you just can’t turn off the permanent notification.

Some secondary tips:

  • Wrap the Python script in a shell script so you can log its output to a file – it’s hard to debug otherwise.
  • Install Termux:API and then add error handling to the shell script so it triggers an Android notification when the Python script fails – the sketch after this list shows the idea.
  • “Wifi Connected” seems to work fine without the permanent Tasker notification. I turned off Tasker’s Monitor notification in the normal Android settings.
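
Here’s the idea behind those first two tips in one place – expressed in Python rather than shell for readability, with placeholder paths (termux-notification comes with the Termux:API package):

```python
import subprocess

LOG = "/data/data/com.termux/files/home/netatmo.log"  # placeholder path

# Run the real script, capturing everything it prints
result = subprocess.run(
    ["python", "netatmo-fai.py", "away"],
    capture_output=True, text=True,
)

# Log the output to a file – debugging inside Tasker is hard otherwise
with open(LOG, "a") as f:
    f.write(result.stdout + result.stderr)

# On failure, raise an Android notification via Termux:API
if result.returncode != 0:
    subprocess.run([
        "termux-notification",
        "--title", "Heating script failed",
        "--content", result.stderr[-200:],
    ])
```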

If you’re trying to get this to work, and have questions I might be able to answer, please leave a comment below!

Hope you have a warm and fulfilling winter.

The little differences between Android and iOS in 2023

These days I alternate mobile operating system. Partly because as an app-making professional I strongly feel I need to understand both, and partly because it slightly irritates people who are die-hard fans of one or the other.

I don’t particularly love either – a plague on both their houses. I’d rather we all used a fully open operating system, or there was a lot more competition and a standard application platform. Still, they work, and both have lots of delights.

This time I jumped in order to have a Fairphone 5, which came out a month or so ago. It’s lovely in terms of hardware – conflict-free materials, fair pay, all parts replaceable with just a screwdriver (yes, even the USB port) – and it feels and works beautifully. Highly recommended!

Of course all this fair hardware is only possible with Android, so chalk one advantage up to an at least partially open ecosystem.

This time I took notes on everything interesting I noticed while switching from iOS to Android. These are deliberately rough notes – I haven’t researched each one in detail. They’re impressionistic. Every sentence is my instinctive opinion.

The winner for each section is marked with 🍎, 🤖 or a neutral ⚖️.

Installation ⚖️

  • Google/Fairphone screens felt more slick to me than Apple setup screens
  • Android didn’t offer a QR code scan for the Wi-Fi password – I had to type it in
  • It got me to plug my old iPhone into the Fairphone via a cable, and tried to copy various things across including WhatsApp message history, but it didn’t work for me. I didn’t need this so didn’t try too hard.
  • Prompts me to choose my search engine – lovely, I guess Google are forced to do this? I picked DuckDuckGo
  • Face unlock was very very fast and easy to register, and seems to work really well. Presumably though it is less secure than on iOS due to missing imaging hardware – all the banking apps and so on use the fingerprint recogniser on the power button, which works really well too.

UX Details 🤖

  • Overall the user interface feels faster and slicker than my old iPhone 11
  • Actions in notifications feel more comprehensive and easier to use on Android
  • Android routinely has separate settings for different kinds of notification in one app, so you can configure them separately – if iOS does this I never noticed it, or apps weren’t adopting it as frequently
  • Timer has more features, including a lovely one to make the sound come in gradually. And of course multiple timers.
  • The pull-down shade keeps audio players in it for longer, and you can swipe between them, e.g. music vs podcasts. It was frustrating how quickly this would disappear on iOS – when I’d just paused to go into an appointment, it was gone by the time I came out.
  • Auto rotation is considerably better – when you rotate the phone sideways a little icon appears, and you tap it to switch between landscape and portrait. This is just much better for me than a lock/unlock setting, where you have to unlock via the pull-down shade, rotate the phone, and then lock again when you’re done.
  • Can choose the default mapping software (e.g. if you open a map link in one app). Wild that capitalism gets Apple to not offer this. Not clear why Google offers it!
  • Bedtime mode has a cute option – my phone goes black and white from 11pm to 7am.
  • SMS app has spam detection, and it works.
  • When something is annoying, there is more likely to be a way to fiddle with it on Android. As an example, for some reason it showed an NFC icon in the top bar by default, which is useless as there is no reason to turn NFC off. In the end I switched to developer mode, typed things like “adb shell settings get secure icon_blacklist”, and turned off the icon permanently.

Voice Assistant 🍎

  • Apple’s better privacy encouraged me to start using Siri when I got my iPhone. Mainly for my own professional development in the era of AI, I set that concern aside and for the first time used Google Assistant on my Fairphone.
  • It can’t listen in the background when the screen is off. This is a hardware limitation – only top-end Android phones have that feature. At first this annoyed me, but now I’ve just stopped using voice assistants as they aren’t that good. It is set to listen while the screen is on, so worst case I tap the power button once and then talk to it.
  • When I first started using it, it felt fast, but now it often seems really slow – I’ve no idea why.
  • It has the world’s most awful branded wake word. No, I don’t want to name a trillion-dollar corporation every time I use a user interface.
  • It only uses Google Calendar, not my local synced calendar. I mean what, seriously?
  • In theory it lets you enable access to personal things, and detect your voice. Because of the above problems I haven’t played with this much.
  • Overall I’m very disappointed – I thought the company that wrote the original Transformer paper 6 years ago would have a better voice assistant in 2023. I guess I’ll have to buy whatever Jony Ive is designing with OpenAI, or some startup’s pin badge, or hack my hearing aids, or just leave ChatGPT Plus on like a phone call.

Email / Calendar / Contacts / Phone / Browser 🤖

  • To my surprise, much more choice of email clients for me on Android. On iOS most of them required expensive subscriptions and funnelled all your email via their server, so I just used the standard client. It has a very dated interface. This time I picked the open source K-9 Mail which is both better than Apple’s offering, and I can make a PR to improve it if I like.
  • Similarly, more choice of calendar app. My old favourite aCalendar+ means I have a weekly view that I like again (one page, 4 days on the left, 3 days on the right), something I couldn’t find on iOS at all.
  • The contacts and phone apps have a better UI on Android. Partly this is just that Material Design is a bit more thought through and clear than Apple’s strange blue outlines for buttons. Mainly it is because the worst team at Apple works on the phone app (example 1, example 2). For example, I had to search for how to reject a call when I first got my iPhone. While it was angrily ringing at me. They’re not trying, honestly.
  • Apple monopolistically don’t let you change browser on iPhones – all the other browsers are mere skins around the same rendering/JavaScript engine. On Android I’m using actual Firefox again, with plugins! For some reason only some plugins are allowed right now; they’re bringing back the rest. Oh Mozilla, what are you doing!

Git / Files Support ⚖️

  • I keep my personal documents in a git repository. One of my favourite apps on iOS is Working Copy, an excellent git client. I even scripted it with Shortcuts to auto-commit everything whenever I plugged in my phone. There’s nothing like Working Copy on Android. One thing iPhones excel at is software designed mainly for tablet users, as due to strategic Google errors there isn’t the same market for Android tablets as for iPads.
  • So what do I do on Android? To my shock, the answer is to run an entire Linux command-line environment inside an app. You can install any package. This is called Termux and it is a wonder. I run the same script as on desktop to merge / add / commit / push all my documents automatically (roughly the sketch after this list). It runs in the background on my phone. I was sure this would either not run reliably or drain my battery. It just works.
  • I arranged it so other apps can access those files in git – much more flexible permissions system than iOS, but still controllable. Unfortunately I haven’t found a great text editor on Android for my needs – just a decent one.
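
My actual script is shell and has more error handling, but the shape of the auto-sync is roughly this (the repository path is a placeholder):

```python
import subprocess

REPO = "/data/data/com.termux/files/home/documents"  # placeholder path

def git(*args, check=True):
    return subprocess.run(["git", "-C", REPO, *args], check=check)

git("pull", "--no-edit")   # merge in any remote changes
git("add", "--all")        # stage everything, including deletions
git("commit", "-m", "auto-commit", check=False)  # may be nothing to commit
git("push")
```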

Syncing ⚖️

  • I keep my own photos on my own server; on iOS I would sync new camera photos over SSH with Photosync. This was triggered, once again, when the power was plugged in – frustratingly, the only time you can automate something.
  • For reasons that escape me now, Photosync just didn’t work on Android. So I use the two-way command-line file syncing tool unison, configured as part of the Termux setup I describe above. It’s great! And it syncs my photos frequently.
  • I use Fastmail (highly recommended) for my email, calendar and address book. Setting up syncing for that on iOS was a breeze – there’s a standard config file format which Fastmail provide, and it happens in a second. On Android… the assumption is that you’re using Gmail. So I had to buy a CardDAV/CalDAV sync app, and manually copy and paste all the server names and passwords over. Yeuch.

Miscellaneous ⚖️

  • Google Pay works, it’s as good as Apple Pay. I really like that you don’t have to double click the power button to use it – just unlock the phone and hold the NFC reader in the right place and it pays, no other action required. Fairphone has a hardware downside here – the NFC reader is in the middle of the phone, so it is harder to activate than if it was at the end like a wand.
  • Google Fit measures cycling automatically. I cycle casually as part of my day to day life, and like to measure the WHO heart points I naturally take each day. On iOS, it kinda did measure cycling as if it was a similar amount of walking, and I tested it and it came out fine. But Google Fit does this better. It knows it is cycling. Otherwise the apps are similar – Apple Health is more flexible if anything. Google Fit is more of a taskmaster demanding I don’t just walk but walk quickly.
  • Overcast is one of the best apps there is – an indie podcasting iOS app which I got really used to. Luckily plenty of competitors have cloned its key features of speed adjustment and automatic removal of silence. I’ve gone with Pocket Casts in the end.

Home / Lock / Desktop Customisation 🤖

  • Android has gone backwards! For some reason these days (2023) it has little to no lock screen customisation, just as iOS has gained some. I don’t really use my lock screen – on both devices it face unlocks before I can anyway. And on iOS it is just kind of annoying that you then have to swipe to get to the last app you were on.
  • In contrast, Android has a plethora of home screen apps. These let you do shocking things like move the icons where you like on the page. Radical I know! It’s wild to me that iOS doesn’t allow this. Android even has a standard for customising app icons, with multiple cheap packs to get your phone looking how you want. Like crayon icons or neon icons. It’s a joy.
  • It gets better. A few weeks into using my Fairphone I found the delightful, minimalist Niagara Launcher. It’s incredibly well polished, a thought-through UI, utterly fresh and new. I feel like I’m choosing how I use my phone, rather than it choosing me. My home page is the screenshot to the right – get me to show it to you.
  • A final wild card for Android… it turns out you can just plug it into a monitor. I took the USB-C cable I use with my laptop, and plugged it into my phone. Ping! Android switches to a desktop mode, where the apps are all windows, and there’s a start menu. And I can type into the Termux terminal window. This was very useful – mainly for setting up terminal commands. It’s a bit unloved – it looks like nobody has done much with this desktop mode for a few years. But it works, and is unlike anything an iPhone can do.

Conclusion

Smart phones are a commodity now. It doesn’t matter which you have. And yet, they are different, and you don’t have to use the same one forever.

Every stereotype I had about the two mobile operating systems was wrong – it’s Android that has better user experience polish, and iOS that has the better AI voice assistant.

I like my Fairphone 5. It feels fresh for my mind to learn something new.

What is high-quality about the data that trained generative AI?

“Our brains have 100 trillion connections. Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does.”

– Geoffrey Hinton, deep learning pioneer

The recent surge in interest in generative AI was sparked by neural networks trained on high quality public, human culture.

Their use of this culture is extremely focussed – they only saw good quality inputs, and only saw each input once (see the paper One Epoch Is All You Need for why). If you show them lots of bad quality stuff, they’re not adaptive enough to tell and ignore it.

So what exactly makes the data they’re trained on “high quality”? I decided to dig into two examples — GPT-3 for text and Stable Diffusion for images — to find out. (I’m not an expert at training networks, so please leave comments about anything you see that I get wrong)

GPT-3 — good articles that Reddit linked to

GPT-3 feels a bit like it saw all the internet as input — actually they started by training another curator network to pick out the “high quality” parts of the internet (see 2.2 Training Dataset and Appendix A in the GPT-3 paper).

What is high quality? Well, the curator network for GPT-3 was taught its concept of high quality by looking at an internet corpus called WebText. That was made by grabbing a copy of all of Reddit, and looking at the outbound links in moderately upvoted posts/comments.

(Details: read 2.2 Training Dataset in the GPT-2 paper — that says “at least 3 karma”, which doesn’t make clear sense to me. As far as I can tell it is Reddit users who have karma, not links or posts. OpenWebText2, a recreation of this dataset, used URLs from submissions with a score of 3 or higher — see their documentation — which seems a good assumption for what GPT-3 did. Possibly they took links from posts by users with karma greater than 3. Posts are also separate from comments, and users have a separate karma score for each.)
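
As a toy sketch, my understanding of the WebText-style filter amounts to this (the field names are made up, not Reddit’s actual API schema):

```python
# Keep the outbound links from Reddit submissions scoring 3 or higher.
submissions = [
    {"score": 57, "url": "https://example.com/good-article"},
    {"score": 1, "url": "https://example.com/junk"},
]

high_quality_urls = {s["url"] for s in submissions if s["score"] >= 3}
# -> {"https://example.com/good-article"}
```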

GPT-3 was also given lots of books and Wikipedia — but most (82%) of what it took in was the good pages that Reddit linked to, and other pages that “feel” similarly high quality.

Stable Diffusion — pretty competition-winning images

Once again, this begins with a copy of the whole Internet, this time keeping only the images that have alt-text attributes in the HTML, and using that alt-text as captions. This is already a filter — well made sites which care about accessibility will have more and better alt-text. A project called LAION then classifies those images using AI so they can be filtered by language, resolution, chance of having a watermark, and their aesthetic score.

Stable Diffusion’s training is a complicated series of checkpoints, which starts off with a bit of general LAION data, but ends up mostly trained on highly aesthetic images. This is much the same as for GPT-3 — a curator AI (the LAION-Aesthetics Predictor V2) learns to judge high quality images. The images from LAION that the predictor scores 5 or higher were used to train Stable Diffusion.
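
Again as a toy sketch, the filtering step looks something like this (the field names here are illustrative — the real LAION metadata is similar in spirit):

```python
images = [
    {"url": "...", "language": "en", "pwatermark": 0.1, "aesthetic": 6.2},
    {"url": "...", "language": "en", "pwatermark": 0.9, "aesthetic": 4.1},
]

# Keep images predicted to be aesthetic (score of 5 or higher) and
# unlikely to contain a watermark.
training_set = [
    img for img in images
    if img["aesthetic"] >= 5.0 and img["pwatermark"] < 0.5
]
```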

But what does an aesthetic score of 5 mean? For a quick feel, this page shows increasingly aesthetic buckets of images from the full LAION dataset as you go down the page. Digging into the training data used to make the aesthetic predictor, there are two main sources:

1. Simulacra Aesthetic Captions – manually rated images made from an earlier (non-aesthetically trained) checkpoint of Stable Diffusion. Humans, I assume from the Stable Diffusion community, rated a couple of hundred thousand images by hand.

2. Aesthetic Visual Analysis – this is a download of all the images from DPChallenge, a 20-year-old digital photography competition site. For a couple of decades, a few times a week, it has run competitions like “photograph bananas, any composition, must have at least one banana”. While competitions only get tens of entries these days, they used to get hundreds.

There’s a bit more complexity — a small number of specially aesthetically rated logos were thrown in, I think to improve typography and font quality. You can browse all the images used to train Stable Diffusion (browser by Andy and Simon – those links go to their blog posts about it).

Conclusion

The notable common properties are:

  1. Each piece of training data is only shown once to the AI during training
  2. Both have a core dataset with some human-curated metric for “high quality”
  3. Both extended that core dataset by training a curator AI to pick out similar high quality items

It’s quite an odd situation, especially given the emergent properties of reasoning, coding and so on that these kinds of models have. The training mechanism isn’t particularly smart; the smart stuff emerges inside the neural networks so trained.

The AIs have learnt to think extremely well about a large chunk of human knowledge in symbolic form. Right now, they are heavily reliant on humans — on every upvote you made on Reddit, and on an old-time niche digital photography competition site.

This kind of rosy yellow glow in my head

A book review of “Describing Inner Experience? Proponent Meets Skeptic” by Russell T. Hurlburt and Eric Schwitzgebel (2007)

A couple of years ago I realised I didn’t have a visual imagination.

This was ultimately quite inspiring – it’s led me to ask maybe a hundred people about their own inner lives. The answers are so varied that I’m left in wonder at this hidden world we barely talk about.

My favourite source on this is “The phenomena of inner experience” (a paper by Heavey & Hurlburt, 2008). It uses a method (Descriptive Experience Sampling, or DES) to randomly beep a bunch of volunteers in their everyday lives, and get them then to capture their current mental phenomena.

The kicker is its beautiful Table 2, which lists the top 5 most common forms of inner experience. For each one there were participants who never experience it, and other participants who experience it more than 75% of the time.

Just pause a moment to absorb that.

Take something that you feel is fundamental to life, to your experience of being conscious. For example, that you have an inner voice, or that you’re aware of your senses, or that you imagine visual imagery. For that thing, which you might be doing more than 75% of the time, there are substantial numbers of otherwise ordinary people who never do it at all.

We are not remotely in the same world.

And so (via a mention in a New Yorker article recommended by Anna), to “Describing Inner Experience? Proponent Meets Skeptic”, a 2007 book also by DES proponent Hurlburt, with sceptic Schwitzgebel.

Its framing account is a dispute between the psychologist and philosopher authors, but honestly that is a bit of a sideshow. It mainly consists of Schwitzgebel assuming other people have inner experiences like his, which they don’t. Hurlburt is very patient, and the discussions reveal a lot.

No, the important part is the individual experiences of their subject, Melanie. She’s a philosophy and psychology graduate, and you get the sense that in many ways she knows more than the older men writing the book.

The book consists of detailed dialogues in which Melanie recounts her experiences at the moment of a random beep, and Hurlburt quizzes her openly and intelligently to unpack and improve the quality of that description.

Melanie’s world is not like mine. It’s not so different, but it is not the same. Just as in user testing, the first sample is worth a fortune compared to no samples at all. These specific, concrete details of how someone else experiences being alive are inspiring and enlivening. They made reading the book worthwhile.

I’ll give two striking examples.

On the first day, Melanie sees emotion as a colour. She’s laughing at something to do with the documentation for a piece of furniture she’s assembling. Along with a verbal thought that it is funny, she gets a “kind of rosy yellow glow” in her head, all over like a “wash of color”.

Melanie says “It was a feeling that was very familiar to me, or I guess, the sight, you could say, of this colour that is really familiar to me and is one that I commonly associate with laughing at a joke or something that involves humour”.

I struggle to associate emotion, as is fashionable, with part of my body. It feels conceptual to me, simply raw emotion. To get a colour for an emotion is even more striking to me.

Schwitzgebel doubts she really sees the colour, mainly because he never does and because the literature never mentions it. Even when, in the end section, Schwitzgebel digs into that literature I’m unconvinced. Unlike DES, previous methods never attempt to get decent, normal accounts of inner experience.

They seem to mainly consist of philosophers who assume everyone has the same experience as them, introspecting in their armchairs using ineffective techniques… Or of lower-level set experiments trying to catch subtle details, like the experience of the third soft resonance tone when playing two notes.

For some other similar accounts of synaesthetic emotion, this site assembles lots of reports from Reddit.

Much later, Melanie has an echo of her inner voice. She’s tidying up some dead flower petals in the sink, and thinks “They lasted for a nice long time” in her inner speech voice. That’s called “articulatory” – it is the inner voice that feels like you’re almost speaking.

Then, overlapping with that and repeating on top of itself several times like an echo, she inner hears her own voice saying “nice long time” “nice long time” “nice long time”…

Inner hearing is a voice in your mind that you experience as if you’re listening to it – it has qualities like tone and accent. It could be someone else’s voice or your own. I don’t have much if any inner hearing, as far as I’ve noticed, so Melanie’s echo is also very striking. A slightly different world.

Schwitzgebel doubts this one too. At first that it happened at all, and then focussing on the timing. Melanie reports the echo happening in a very small amount of real time. She felt all of the words in full, yet little actual time passed.

Doubting this seems bizarre and excessive to me. We know for sure, from examining the strategic planning our minds must be doing, that we can run reasonably high-fidelity simulations of almost anything very rapidly. We don’t experience them, and we don’t know what form they’re in, but compressed, detailed modelling of some kind must be happening rapidly.

Time is often odd in dreams – they can feel like a long time when it’s only ten minutes since you pressed snooze. My instinct is the opposite of Schwitzgebel’s – of course time isn’t always real within our conscious experience! So I find it difficult to take him seriously.

The book ends with summary notes from each author, reflecting and responding. Neither changes their view. Hurlburt is happy with his life’s work developing DES, and Schwitzgebel is happy with his life’s work being cynical about what we know of our inner experiences.

They bring in a bunch of interesting research history. At the end of the C19 there was a critical argument between Titchener, who believed all thought consisted primarily of images, and the Würzburg school, who believed in intangible mental activities. Both did research which apparently crashed and collapsed, and the quick summary is that everyone then ran to behaviourism and stopped thinking about inner experience.

I’m being an armchair introspector, which the book dislikes, but I really do think I don’t have very much visual imagery. It’s barely tangible, and usually just spatial without colour or texture. This makes it hard for me to take Titchener, or any of his research, or anyone who references him, very seriously. Especially now there are MRI scans to show there really are radical differences in visual imagination.

Another fun reference was to Flavell, who in the 1990s researched the inner experiences of 5-year-olds. It came out that they aren’t aware of their thoughts, even quite socially visible and important ones that their behaviour showed they were having. Flavell concluded that they must have been thinking, and that therefore their reports were wrong. When actually, perhaps 5-year-olds have a less developed form of consciousness, and “just do” more, without specific conceptual, visual or verbal awareness. This definitely feels like it needs more investigation – we’d learn a lot.

Hurlburt ends by describing the difference between research that aims to explore and discover, and research that tries to prove a theory. He says that introspection philosophy and psychology keep trying to jump ahead and test theories. I agree with him that it is too soon for that – we don’t understand how the mind works at all.

We seem to be missing basic information about how different our inner worlds are from each other. We should use tools like DES, and develop more like it, and get many more people to introspect. We can grow our language and capabilities as a society.

Then perhaps we’ll have the tools and information to make theories, and understand more about that mystical experience of being a conscious being.

Art I enjoyed in 2022 – top eight

To my surprise this list is television heavy – I didn’t find any incredible new board games, and I was disappointed in most video games. It’s somewhat in order – my favourite is roughly last.

Thanks everyone who recommended these to me – you know who you are! I’m not going to link to where to watch things – for TV and films I use JustWatch to find a suitable source.

Community – Seasons 1-3

Rick and Morty is dense and witty, yet also often smart, hard science fiction – at least for the first couple of seasons. Lots of people recommended Dan Harmon’s earlier hit, Community, but its premise, set in a US community college, never sounded very appealing.

It’s brilliant – each bundle of twenty happy minutes is laugh-out-loud funny, while at the same time building up the characters, the universe and a connection. And that is even before you get to the clever-clever high-concept episodes, often based on films.

It’s not really worth watching after season 3, as Dan Harmon was fired as showrunner. He comes back later like Steve Jobs, but alas doesn’t create the iPhone.

Undone – Season 2

It seemed hard to make a second season of this beautiful, rotoscoped, ambiguous story about reality and the mind (article on the creator Kate Purdy’s own schizophrenia – she also worked on BoJack Horseman), yet they managed it.

The trick of having warm, rich, real acting, cast into a cartoon form, so that visual memory and hallucination feel real, continues to work (video on how they do it).

I fell again for the emotions of Alma’s family, watching the rainbow song later on repeat. The seventh episode had me bawling, howling about the grandmother’s story, and subconscious connections to my own family.

It blurs fantasy and who we really are, in a way that is utterly relevant and bright.

The Hidden Life of Trees – Peter Wohlleben

Each snappy chapter is an astonishing insight into the complex, social and diverse way trees live.

At first it is simple things – that leaves partly vanish in winter to reduce the surface area presented to storms; that some species are pioneers in empty ground, while others work only in existing forests.

Later it gets more shocking – individual trees vary genetically from each other as much as different species of animals do. Our human heart rate measurably changes according to the health of a forest, probably in response to the chemicals the trees signal to each other with.

There is a whole world here, hitherto hidden from me, and its scientific detail barely studied and understood.

No need for an alien planet, look closer at ours.

Better Call Saul – Season 6

Somehow, this spin-off ended up being better than Breaking Bad. The first season didn’t seem much when I first watched it, but by season three the reviews were so good that I went back.

It’s now one of my favourite shows ever. This final season has more astounding cinematography, and a cathartic and earned ending.

The subtle detail in the expressions, tone and mood of Jimmy and Kim’s relationship has been the heart of the show for years.

There’s a peacefulness, a humanity and an adultness to it. A few years ago it was extremely valuable to me – the only art that truly connected to the complexity of emotions and depth of relationships in my life.

Breaking Bad – Seasons 1-5

After watching Better Call Saul, I felt parched for high quality television, and decided to rewatch this ten years after it finished. I don’t normally do this at all.

Incredibly well made – beautiful and interesting cinematography, compelling acting, and plot-wise just so, so clever. Everything ties up and resonates well. It doesn’t have a single bad episode.

Even aspects I didn’t like the first time – notably Marie’s kleptomania – were utterly on point now that I understand mental health better.

This show’s themes don’t especially resonate with me personally, but its quality is ludicrously high, and it is engaging and authentic. It deserves all the praise.

Dirty Dancing – Secret Cinema

A friend unexpectedly took a group of us to see this classic 80s film at a kind of festival in a park in the west of London. I hadn’t seen the film before!

The whole experience was delightful. Bars and a funfair in the style of a 60s upstate New York holiday camp, including what felt like illicitly getting into an actual backstage party for the stagehands. Dancing!

Then the film itself turns out to be really, really joyous, full of energy and love. Morals and ethics that are subtle and powerful – who can refuse a main character who pours water on somebody reading The Fountainhead! Dancing that was hot without cliché, so confident it is simply powerful.

Best of all – on entry to the festival everyone’s mobile phone was put in a locked bag and given back to us so we couldn’t use it. This added a tangible presence to the whole experience. I hope more events use this!

Rise (En Corps) – Hofesh Shechter

Not that I’ve many to compare, but Hofesh Shechter’s is by far my favourite dance troupe. Their mailing list led me to go and see this at the Institut Français’s cinema in London.

A beautiful film told in a straightforward yet neat way – suitable jumps in time and setting which are clear and add to the feeling.

I cried when the woman running the retreat centre in Brittany conveyed to the protagonist, an injured classical dancer, how falling lets you rise in a new way. It was directly personal – I’m stuck in a local low due to fear of injury, just as she was.

The care and support from her sisters and her friends are shown lovingly, such as introducing her to just the new friend she needed at just the right moment.

Then Hofesh’s company – his style of dance takes her in her weakened state. It doesn’t just accept but relies on her not hiding that state, not trying to make perfection. This warmed me to the core.

I can’t be perfect and I shouldn’t be. I should live each of my lives that I have.

The world of Stonehenge – British Museum

I’m still a member here years later because the exhibitions are shockingly well curated. This one (exhibition tour video, book) wasn’t really about Stonehenge. It was about a Northern European civilisation that lasted a couple of thousand years and yet doesn’t really even have a name.

Many intricate carved stone balls, almost mathematical in form and regularity, which – had you not been warned – you would say were made last week.

Preserved wood, randomly unrotted in peat for millennia, revealing glimpses of wood henges and cross-marsh walkways we would otherwise never know.

Gold mined in Wales making a gorgeous glimmering shoulder garment for a woman, her source of social power a mystery.

A peat grave with items so tangible that she is as real as a modern girl – her woven basket, her wooden earrings, her bear-skin coat, her valuable beads.

This civilisation had no written language, and most of its treasures have dissolved away. It was clearly incredibly sophisticated – it is just all hidden from us, fragments of information accidentally preserved or forensically deduced by modern material-origin tracing.

Spellbinding. I went twice.

On intuition’s relationship to rationality via language

A three-year-old draft blog post I just found. It feels worth publishing – the improvements in AI since then if anything make it clearer, and justify all the “right now” caveats.

It’s better to start by thinking of us as pattern-matching devices.

Not simple ones, such as modern deep learning AI, which essentially does layered functions that measure correlation. Complex ones, which model causality in a sophisticated way we don’t remotely understand yet.

That’s intuition. Which is an odd word for it – one you’d only invent if you were overly focussed on language, not on what we actually are.

So to language. One function of language is to attempt to describe what we pattern matched, our model of the world, to others. To influence them, to train them, to explain yourself to them.

If you’ve learnt a foreign language, or even just lots of varied words for similar concepts in your native language, you’ll know this is hazardous and inaccurate at best. No surprise – a few thousand words, even in combination, can’t convey the exact logic of hundreds of trillions of synapses.

Rationality and science are attempts to improve our thoroughness, and our willpower to agree on truth, and to make our working explicit.

Sometimes this goes reasonably well – academic Maths rarely ends up with persistent mistakes. But it only does this by intense training of people with a very specific ability, and by picking the easy use cases. By definition, Maths is about the cases where logic prevails – and even then there’s Gödel’s theorem to confound that simplistic view.

But generally, it is going to be inaccurate.

You can’t work out everything with data or current AI. You can use them to check whether you have made mistakes. You can feed your brain insights with them. If you’ve got loads of data and lots of resources and you do it really carefully, you can run randomised controlled trials (A/B testing) and at least be sure about the causal direction of what you learnt.

However hard you try, this won’t ever be a model as sophisticated as the human mind’s intuition is right now. It’s difficult to use intuition, though, because our minds can just as easily be wrong. Good cultural practices of training and validation can hone our minds and insight.

Once we’ve gained our truth, there is only the limited bandwidth of language to try to help others gain it too.

Blind-sided

It was a rushing, a burning, an all-things-are-change, a compulsion.

I was torn up for three days, or was it two weeks, except in the long moments when I just forgot. Inertially, suddenly remembering, half weeping, half positively reconstructing my own construction of who I am and why.

Everyone I talked to I would grill – wait, what do the sheep look like when you try to go to sleep by counting them? And when you read a book, how good quality are the faces of the people you imagine?

Everyone is blind-sided. Either grilling me back for hours, trying to understand how I do anything without a mind’s eye. Or kind of bland about it – so unused to describing their inner lived experience that they’ll say what it is, but not quite get just how much it varies between people.

The one commonality – everyone, everyone, having previously assumed that we all perceived / absorbed / processed alike, that our internal phenomenology of being a mind was the same.

It’s not.

A rush of others in the same situation as me pour through the aphantasia subreddit – “without imagination”, a word only coined, literally a concept only known, in 2015, just six years ago. You can read the FAQ, which has links to tests to take to help you understand what’s going on.

Or you can just ask everyone. Grill them. Compel them.

Think of an apple, what colour is it?

When you dream are the images like an old black and white TV, like just a sense of emotion and relation, or like a 4K film?

If you imagine a future event do you play it out as a video, do you see it from a first person or a third person, how long is the film, how does the camera move?

Can you voluntarily hallucinate a dragon coming out of the pavement in your main perceived world? (Warning note: This is unusual, and not being able to do this does not mean you’re aphantasic. Some people can do it, it’s called prophantasia)

When you recall a traumatic experience does it literally flash back as an image of the scene into your mind, and what’s that like – maybe a gif, what quality, how long, or do you just remember the feeling of pain?

If you’re navigating round a city, do you look at the map and remember it and bring it up on your second screen as you need it? Or do you remember the street you’re in and quickly fast forward along it to see what is ahead? Or do you orient the spatial elements without any vision? Or do you just have no idea, do you just get lost?

Just go, find a housemate or a random stranger outside a coffee shop or your closest love, go ask them.

I’ll be here writing about six more blog posts about this. I haven’t even begun.

Pressure cooked split black urad dhal

This is a recipe from Phil the Dhal, a friend who has been experimenting with pressure cooking Indian food. I’m posting it here so it doesn’t get lost. Ask if you want this to be just the first of a series!

First pressure cook of split urad dhal was a success.

It needs some understanding and refinement of what you can put in with the dhal in the pressure cooker. Salt and turmeric are uncontroversial, but I don’t fully understand stuff like chilli powder, garlic and ginger paste, or chopped chillies.

This is using a Duromatic Inox Frying Pan Pressure Cooker 24cm / 2.5L.

Part 1 – Pressure cooking

1. Wash the dhal about 3 times… no need to soak overnight. Fudco products always take less washing but are slightly more expensive. Use 1 cup of dhal to 3.5 cups of water.

Put it in the pressure cooker.

2. Make garlic and ginger paste. I’ve used a thumb of ginger and 4 plump garlic cloves.

3. Add garlic and ginger paste, 1 tsp of haldi (turmeric) and 1 tsp of pink salt to the dhal.

4. Put the lid on and turn heat to maximum

5. When you see the red line appear, turn the heat down to very low.

After 12 minutes the pressure cooking part will be done. Turn off the heat.

Part 2 – The tarka

1. Pretty much anything goes here according to taste.

I’m going to use cumin seeds, mustard seeds, a medium onion, some red chillies and a large pinch of asafoetida.

I’ve decided to chuck in a few black cardamom pods to help me understand them.

Make roasted curry powder as described in this Sri Lankan recipe – or you can substitute with twice as much garam masala.

So, that’s my prep done.

2. Wait until the pressure drops naturally in the pressure cooker.

Most recipes say to let the pressure drop naturally – no idea why, perhaps the steam is still useful for the cooking / sauce. Alternatively, you can press the valve and release the steam.

Once the pressure is fully released:

3. I added a bit of extra water and frozen peas. Put on a low heat and bring to a simmer.

4. Cook the whole spices in hot oil until they pop and splutter.

5. Then reduce heat and add the onions. Cook for about 10 mins.

6. In with 1 tsp of roasted curry powder and the asafoetida, and cook for 3 mins or so.

7. Add the spice mixture to the dhal and stir. Leave it for 15 mins stirring occasionally.

8. Grate in a bit of palm sugar. And stir.

9. Add some fresh coriander (do use the stalks as they are tasty).

Squeeze in a bit of lime and stir again and you’re done!

You could do the recipe very simply, just using cumin seeds, cumin and coriander powder, salt, turmeric, chilli, garlic and ginger if you wanted.

Urad dhal is very warming.

So, there you go… an easy dhal recipe for the autumn winter months!