This post is about how we become more proficient in a field, which I would measure by the specific tasks we can execute and by the quality and speed of that execution. For example, how is it that some experienced piano players can play a complex piece on the first try, something that would have been impossible for them early in their careers?

Remember Attention & Focus?

Since this is a continuation of the last blog post, “Focus & Attention”, let us start with a short recap. We established that we only have a limited amount of attention, based on our limited information-processing rate of around 60 bits/sec, so it is only natural to ask whether we can improve this rate. But the topic of this blog post is models, abstraction & automation, so the experienced reader may already know what to expect now… a plot twist! I would like to emphasize how strange the measurements behind this processing-rate estimate actually are. The researchers took some random people (most likely adults), let them read a text, and then, based on the information contained in each letter of each word, measured how much information they processed [1].

Introducing Models

But now consider this: we do not actually read letters; we read words with their corresponding meanings, and people who read a lot tend to read whole sentence structures, based on the grammar of the language that is ingrained in their heads. This allows us to significantly lower the amount of information our brain needs to process while still comprehending the text. In informatics, this process is often called compression; the JPEG format, for example, compresses images in a similar spirit. To prove this point, here is an easy example: Lst wesday I wnt to the bech. I guess almost everyone who reads an English text from time to time is able to understand the meaning of this sentence. This supports the idea that we do not read every single letter but reduce the letters to known patterns such as words and sentences, just as the brain always tries to see patterns in things, sometimes even when there are none and the input is completely random.

Based on this, one could argue that we actually process significantly less information than the 60 bits/sec measured by those cognitive researchers, because we do not read letter by letter but from word to word and sentence to sentence. In the same way, children who read a word letter by letter are really slow until the transition to reading whole words, and then whole sentences, takes place. The same pattern applies to playing the piano, where experienced players do not read a piece note by note but look for known patterns in the notes, which they interpret by playing the right keys, and to chess grandmasters, who see positions and patterns on the board instead of each individual piece, allowing them to narrow the search for their next best move.
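The idea that redundant, patterned input carries far less information than its raw length suggests can be made concrete with a small sketch. Here is a minimal demonstration using Python's standard zlib compressor: English prose (here a made-up sample sentence, repeated to make the compressor's fixed overhead negligible) shrinks dramatically, while pattern-free random bytes barely compress at all.

```python
import random
import zlib

# Redundant English prose (repeated sample sentence) vs. random bytes.
text = (
    "Last Wednesday I went to the beach. The water was warm and the "
    "sand was soft, so we stayed until the sun went down over the sea. "
).encode() * 8

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(len(text)))

# Compression ratio: compressed size divided by original size.
text_ratio = len(zlib.compress(text, 9)) / len(text)
noise_ratio = len(zlib.compress(noise, 9)) / len(noise)

print(f"English text compresses to {text_ratio:.0%} of its size")
print(f"Random bytes compress to {noise_ratio:.0%} of their size")
```

The patterned text compresses to a small fraction of its original size; the random bytes do not, which mirrors why reading in word- and sentence-sized patterns is so much cheaper than reading letter by letter.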

Let’s get Abstract

All of the techniques described above can be interpreted as models of abstraction that lower the amount of information the brain needs to process in order to achieve a specific goal, and thereby lead to a significant speed-up in achieving it. This is very similar to the way calculations can be simplified in physics by using the right assumptions and formulation. For example, when establishing the equation of motion for a double pendulum, it is advisable to use the Lagrangian formulation, which substantially simplifies the process (fewer steps and fewer variables) of arriving at the correct formula. One very important thing a good physicist does when simplifying a model is to check whether the assumptions the simplification rests on actually hold. Only this step verifies that the result of the work has any real-world value.
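To sketch why the Lagrangian formulation is such a simplification: the whole double pendulum (masses m₁, m₂ on rods of lengths l₁, l₂, with angles θ₁, θ₂ measured from the downward vertical) is captured by a single scalar function of just two generalized coordinates, instead of four Cartesian coordinates plus constraint forces:

```latex
% Lagrangian L = T - V of the ideal double pendulum
L = \tfrac{1}{2}(m_1 + m_2)\, l_1^2\, \dot\theta_1^2
  + \tfrac{1}{2} m_2\, l_2^2\, \dot\theta_2^2
  + m_2\, l_1 l_2\, \dot\theta_1 \dot\theta_2 \cos(\theta_1 - \theta_2)
  + (m_1 + m_2)\, g\, l_1 \cos\theta_1
  + m_2\, g\, l_2 \cos\theta_2
```

The equations of motion then follow mechanically from the Euler–Lagrange equations in θ₁ and θ₂, with the rod constraints already built into the coordinates rather than handled as extra forces.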

Always these Assumptions

So in order for our abstractions to work, it is important that some underlying assumptions hold. When reading, these are the grammatical structure of sentences, which allows us to predict what type of word will follow another and how the words are related. This is an interesting connection to Thinking, Fast and Slow, where this might be attributed to System 1 (which operates based on similarities) once it has trained itself, by reading enough literature, to detect the patterns in sentences.

What makes a good Model?

Firstly, a good model simplifies whatever it describes while still being descriptive enough to solve specific tasks. An additional criterion is whether the model is able to generalize, i.e., to solve other tasks as well. As one might expect, these three criteria (simplicity, descriptiveness, and generality) are often not compatible with each other, and there are multiple trade-offs between them.

How to use this?

  1. Practice models consciously.
  2. Learn a wide variety of models.
  3. Test whether you overfit on a few favorite models of yours.
  4. Sometimes, switching to another model of thinking lets you see new patterns.
  5. Do not be discouraged by other people being faster than you. Often they simply use better models, which you can learn by practicing them.
  6. In your area of interest, I would advise you to learn all the applicable models, since this allows you to be truly creative and to determine the model best suited for any given task.
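Point 3 above borrows the term "overfitting" from machine learning, and a toy sketch makes the analogy concrete (all names here are hypothetical, chosen just for illustration): a model that merely memorizes its training cases looks perfect on familiar inputs but fails on new ones, while a simpler model that captures the underlying pattern generalizes.

```python
# Data generated by the true pattern y = 2x + 1.
train = {x: 2 * x + 1 for x in range(5)}
test_points = [10, 11, 12]

def lookup_model(x):
    # "Overfit" model: memorizes training answers, guesses 0 otherwise.
    return train.get(x, 0)

def linear_model(x):
    # Simpler abstraction that captures the underlying pattern.
    return 2 * x + 1

train_err_lookup = sum(abs(lookup_model(x) - (2 * x + 1)) for x in train)
test_err_lookup = sum(abs(lookup_model(x) - (2 * x + 1)) for x in test_points)
test_err_linear = sum(abs(linear_model(x) - (2 * x + 1)) for x in test_points)

print(train_err_lookup, test_err_lookup, test_err_linear)  # prints: 0 69 0
```

Zero error on familiar inputs is exactly what makes a favorite model feel reliable; only testing it on unfamiliar tasks reveals whether it has captured a pattern or just memorized cases.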


Do not expect very fast improvements when applying these techniques, because learning is, in general, a slow process; with time and practice, it is often others who notice your improvement before you do. Now, finally, let us finish with a fitting quote.

One generally overestimates what one can achieve in an hour but drastically underestimates what can be achieved in a year.

Internal Dialogue

Writing this specific blog entry took a surprising amount of time, even though the topic was clearly defined and I knew what I wanted to write. Interestingly, although the initial research for this post was minuscule, it took me longer to write than “Focus & Attention”, for which I did far more research.