Some interesting ideas about human augmentation. Published in 2004, so many of the specific examples come across as quaint and endearing rather than high-tech. Spends a lot of time repeating itself without adding new arguments, and attempts to contest popular fears without really going beyond “yes, that could happen, so let’s be careful”. Nevertheless, the parts on distributed cognition are interesting and it’s much more readable than the author’s more recent book on the topic.
Argues that augmenting our minds and bodies with technology is a uniquely human ability, and one that has been going on since the beginnings of language.
Basic example of augmentation - long addition using combination of biological pattern matching for single digit sums and pen + paper + trained motor control + vision for short-term memory and problem structure.
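The long-addition example above can be sketched in code (not from the book - a toy illustration, all names mine): the single-digit sum is the “biological” pattern-matching step, while a list stands in for pen and paper, holding carries and partial results outside the head.

```python
def long_addition(a: str, b: str) -> str:
    """Digit-by-digit addition of two non-negative integers given as strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    paper = []   # external memory: written result digits, least-significant first
    carry = 0    # the one digit held "in the head" between columns
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry   # pattern-matched single-digit sum
        paper.append(str(total % 10))       # write the result digit down
        carry = total // 10                 # carry the one
    if carry:
        paper.append(str(carry))
    return "".join(reversed(paper))
```

The point is how little working memory the loop needs: everything beyond a single carry digit lives in the external structure.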
Alarms, calendars, notepads, whiteboards, smartphones, computers etc. We structure our environments to support cognition. Take a high-functioning executive, strip them of their calendars and notebooks, and watch them flounder - like giving them a partial lobotomy.
Body image is plastic. Readily incorporates tools eg brain scans of tennis players show that the racquet is included in their body map. Can easily alter body shape eg fool brains into modelling a meter-long nose or into anticipating pain when a plastic arm is struck with a hammer.
Senses are plastic too eg can turn vision upside down and subjects can function normally after a few weeks.
Language allows concretizing abstract concepts, making them easier to work with. eg chimps only learned to make higher-order analogies if they were taught some symbolic system first. Mathematics similarly grounds abstract concepts so that they can be processed with symbol-shuffling. Hypothesizes that language is required to be able to think reflectively eg “why do I believe X?”.
Vision uses the environment as its own best model - repeated saccades to retrieve data rather than caching it in short-term memory. Some kinds of processing require real vision eg subjects who were briefly shown a dual-interpretation image could only figure out one interpretation, but some of them could find the second after drawing the image from memory.
What does it mean to ‘know’ something:
- “Do you know what colour that book is?” “Yes, (looks at book), it’s red”
- “Do you know the time?” “Yes, (looks at watch), it’s 14:00”
- “Do you know the capital of Argentina?” “Yes, (retrieves from long-term memory), it’s Buenos Aires”
- “Do you know the capital of Argentina?” “Yes, (looks up on phone), it’s Buenos Aires”
Can be said to ‘know’ something if it can be quickly and reliably retrieved, whether from the visual field, from a reliable tool (like a watch), from biological long-term memory or from a smartphone. The latter is currently high-latency and unreliable, but it’s only a matter of degree and it will improve over time.
(Human long-term memory can do interesting things like making connections in the background. Machine long-term memory can do interesting things like large-scale search, software agents, collaborative editing etc. No reason to see the former as ‘real’ memory just because it happens to come preinstalled.)
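The “knowing as retrieval” view above could be sketched like this (a hypothetical model, not from the book - the names, thresholds and numbers are all mine): every source, whether visual field, watch, biological memory or smartphone, is the same kind of thing, differing only in latency and reliability.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """Any store you can retrieve facts from - internal or external."""
    name: str
    latency_ms: float    # how quickly it answers
    reliability: float   # how often it answers correctly, 0..1
    facts: dict

    def knows(self, query: str, max_latency_ms: float = 2000.0,
              min_reliability: float = 0.9) -> bool:
        # 'Knowing' = the fact is retrievable quickly and reliably enough.
        return (query in self.facts
                and self.latency_ms <= max_latency_ms
                and self.reliability >= min_reliability)

visual_field = Source("visual field", 50, 0.99, {"book colour": "red"})
long_term_memory = Source("long-term memory", 800, 0.95,
                          {"capital of Argentina": "Buenos Aires"})
smartphone = Source("smartphone", 1500, 0.95,
                    {"capital of Argentina": "Buenos Aires"})
```

On this model the smartphone fails to count as ‘knowing’ only when its latency or reliability drops below threshold - a matter of degree, exactly as the text argues.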
Similarly, what does it mean to control something:
- Mentally command hand to pick up cup. Autonomous motor control circuits control muscles to achieve the goal. Sensations relayed from nerves indicate when the job is done.
- Mentally command robot to pick up cup. Autonomous program controls motors to achieve the goal. Signals relayed from sensors to brain interface indicate when the job is done.
- Mentally command robot via controller to pick up cup. Autonomous program controls motors to achieve the goal. Signals relayed from sensors to the brain via a visual display indicate when the job is done.
What seems to matter in these cases is the presence of some kind of local, circular process in which neural commands, motor actions and sensory feedback are closely and continuously correlated.
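That local, circular process can be illustrated with a toy control loop (hypothetical, not from the book): a command is issued, the effector moves, and sensed feedback drives the next command. Nothing in the loop cares whether the effector is a hand or a robot.

```python
def closed_loop(position: float, target: float,
                gain: float = 0.5, tolerance: float = 0.01):
    """Drive an effector towards a target via continuous sensory feedback."""
    steps = 0
    while abs(target - position) > tolerance:   # sense: how far off are we?
        command = gain * (target - position)    # command: proportional correction
        position += command                     # act: effector moves
        steps += 1                              # loop closes, repeat
    return position, steps
```

With these numbers the error halves each cycle, so starting a distance of 1.0 away the loop converges within tolerance after 7 iterations. The substrate carrying each arrow of the loop - nerves, radio, a visual display - doesn’t appear anywhere in the structure.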
The difference is only one of degree (bandwidth and latency), not of kind. No reason why a direct brain interface is special - mechanical/visual/auditory/haptic interfaces can be just as transparent. Beginners play the guitar - experts play music. Tools can become transparent, to the point that we forget that there is a brain stem, nervous system, motor control circuits, muscles, wood and strings between us and the music.
Useful to be able to flip between transparent and visible. Can watch physical actions on the guitar and figure out how to improve them. Can’t introspect on most of our brain’s mechanisms though - they are stuck in transparent mode.
Self is about (reliable) control and (high-bandwidth, low-latency) feedback. Not flesh-and-blood connection.
If we try to divide the system into user and tools we quickly get into trouble. Even inside the brain, no single part is in charge.
Oh, poor soul - she is not really responsible for that painting/theory/poem; for don’t you see how she had to rely on pen, paper, and sketches to offset the inadequacies of her own brain?
Give up on trying to locate the self and just accept that we are a dynamic conglomerate of feedback loops (kind of a Katamari Damacy vision of the self).