Cognition Turned Outward: How Culture and Technology Affect the Human Brain



What has changed the structure of the modern human brain more: biological evolution or the cultural environment? Is it still meaningful to compare human cognitive abilities with the work of an information-processing computer? And is the ever-accessible Internet a danger to our memory? The psychologist Maria Falikman answered these questions in her lecture on the nature of human cognition.

The brain as a computer and the beginning of cognitive science

What makes a person a person? There have been many attempts to answer this question in the history of philosophy and psychology. One line stretches from St. Augustine through the great Russian physiologist Ivan Sechenov to modern psychologists who believe that what makes a person a person is will, the possibility of free choice. For the rationalist philosophers headed by René Descartes, the specificity of man lies in the ability to think and to be aware. The Russian classic Lev Vygotsky believed that a person becomes a person by mastering himself and his own cognition with the help of special psychological tools. One of his followers, the modern psychologist Michael Tomasello, believes that a person becomes a person by sharing goals, intentions and information with others.
When psychology tries to explain what a person is, it can take several paths. It can explain the psyche from its own laws, relying on the principle of closed mental causality proclaimed by the founding father of psychology, Wilhelm Wundt. It can try to reduce the essence of a person to biological principles (primarily the features of the brain) or to the laws of society. Or psychology can gracefully wriggle out of the question and say that each of these factors contributes to the formation of the human in a person.
When psychology first appeared as a science in the last quarter of the 19th century, it started with metaphors, comparing human consciousness now with the field of vision, which has a focus and a periphery, now with a stream that is continuous, unique, and so on. In the late 1940s another interesting metaphor arose, invented by the creator of the architecture of the modern computer, John von Neumann. In 1948, speaking at a symposium on the brain mechanisms of behavior, von Neumann said that
since the human brain processes information, the human brain is most likely a kind of computer. In that case the human psyche is the processing of information, which means that cognition can be described in the language of computer programs.
Now such a comparison seems trivial, but then, as historians of science later wrote, it smelled of science fiction.
Thus the so-called cognitive revolution began. In the 1930s and 1940s, computers and computer science were actively developing thanks to the work of Alan Turing, the same John von Neumann, Claude Shannon and Norbert Wiener. They all began to ask a logical question: when computers become more advanced and we create an artificial intelligence, how will we know that we have created it? What does a computer understand, in general, when it processes information? How does it solve problems, how does it achieve its goals? It turned out that psychology did not know how a person does these things. Cognitive psychology tried to answer these questions, starting from the assumptions dictated by John von Neumann's metaphor and treating cognition as the processing of information.
In the middle of the 20th century it was believed that the structure of the brain is not particularly important for understanding cognition, but at the turn of the 21st century everything changed. After the cognitive revolution, attempts were made to describe human cognition in the language of engineering systems. However, it turned out that
if we place a person in specific experimental situations, his behavior is often not what the model predicts. His memory works differently from a computer's memory; he makes decisions differently from a machine.

Man is not a computer
In the 1970s, the autobiographical memory researcher Elizabeth Loftus discovered that people's memories of an event depend heavily on how they are questioned about it. For example, if you ask how fast the car was going when it hit the post, the person is likely to actually remember the car hitting a post, even if in reality it collided with another car. That is, simply by asking questions we are able to form memories.
In 2002, Daniel Kahneman received what is still the only Nobel Prize awarded to a psychologist (the Nobel Memorial Prize in Economics, "for the use of psychological methods in economics, in particular in the study of judgment formation and decision-making under uncertainty." - Ed. note, T&P). He found that a person makes decisions not as a rational subject but depending on the context, or "frame", in which the information is presented. It turned out that our cognitive system is far from being a computer that processes information according to fixed rules.
Why does our cognitive system go wrong? For example, in Roger Shepard's illusion it seems to us that the distant monster is larger than the near one, although in fact they are absolutely identical. Or, as in the famous experiment, we watch players passing a ball to one another and count the passes, yet completely ignore the gorilla walking across the screen right in front of our eyes. Fine, that's us; but why do even experienced radiologists fail to see the same gorilla in the lungs when they examine patients' images for pathology?
We may go wrong when our cognitive system simply cannot cope, for example in an environment it has never encountered before. Perhaps this is because we construct the contents of our own psyche in advance, preparing and expecting to perceive one thing rather than another. Another likely cause of our mistakes is evolutionary. This is the path taken by the modern cognitive-bias theorists David Buss and Martie Haselton, who suggested that cognitive errors are the consequence of a shift of the decision criterion in an evolutionarily preferable direction.
Which is better for survival: mistaking a snake for a stick, or a stick for a snake? Experiments show that people tend to take a stick for a snake, that is, to give a false alarm.
Buss and Haselton extend this logic to a fairly wide range of phenomena - for example, xenophobia and even the assessment of sexual partners.
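To make the cost asymmetry concrete, here is a minimal sketch (in Python, with entirely invented numbers) of why a decision criterion shifted toward false alarms minimizes expected cost once a miss is far more expensive than a false alarm:

# Toy illustration of the criterion-shift argument; the costs are invented.
COST_MISS = 100.0        # treating a real snake as a stick
COST_FALSE_ALARM = 1.0   # jumping away from a harmless stick

def best_call(p_snake: float) -> str:
    cost_if_say_stick = p_snake * COST_MISS               # risk of a miss
    cost_if_say_snake = (1 - p_snake) * COST_FALSE_ALARM  # risk of a false alarm
    return "snake" if cost_if_say_snake < cost_if_say_stick else "stick"

for p in (0.005, 0.01, 0.05, 0.5):
    print(f"P(snake) = {p}: call it a {best_call(p)}")
# Even at a 1% chance of a real snake, the cheaper error is the false alarm.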



Vygotsky and Man in a Changing Culture
But the world we live in now has nothing in common with the world for which biological evolution prepared us. So, in all likelihood, our cognitive system depends to a much greater extent not on mechanisms shaped by biological evolution but on culture. This idea was first expressed in the 1930s in the works of Lev Vygotsky, the author of cultural-historical theory. He suggested that
our psyche, in contrast to the psyche of animals, is characterized by the use of a special kind of psychological tools, which a person uses to master his own psyche in the same way that instruments of labor are used to master nature.
A newborn child cannot control his memory and attention; for that he needs precisely these psychological tools - cultural means that can only be taken from the external environment, from interaction with those who have already mastered them. The development of the higher mental functions goes from the external to the internal: with age we rely less and less on external means of memorizing, directing attention and so on, and more and more on internal ones.
However, culture did not develop in the way Vygotsky expected. The tools for managing cognition began to be carried outward and delegated to modern high-speed technical devices, which take on the functions of reminding, directing attention, solving problems and so on. The modern philosophers Andy Clark and David Chalmers even proposed the concept of extended cognition: the idea that no boundary should be drawn between what happens in a person's head, the cognitive tools he uses from the outside, and the environment in which all of this unfolds.
Can we say that this is where human cognition ends - that, for example, we no longer need memory because we can now store everything in a computer? Judging by the fact that memory was already being buried when writing appeared, and again when printing appeared, the Internet holds no terrors for it either. Can we say that we no longer need thinking, that our cognitive functions can be delegated to high-speed computers? Judging by the results of studies of human memory, the boundary between what is in our head and what happens in the outside world really is disappearing.
We tend to recall not the information we found, but the place or query by which we found it.
Moreover, not only the ways we handle our own memory are changing, but also how we evaluate it. For example, if a person solves recall tasks with the opportunity to go online, then in the post-experimental interview he rates his memory higher than someone who was not allowed to look for the answer on the Internet.

Neuroarcheology and brain plasticity
Modern cognitive science comes to the conclusion that it is pointless to study human cognition as something settled, fully formed and ready-made, even in the context of culture: the person develops, culture develops, and new practices and sign systems constantly appear. What needs to be studied is the developing person in a developing culture. How? From a methodological point of view, the most interesting answer to this question comes not from a psychologist but from an archaeologist of Greek origin, Lambros Malafouris. He is developing the methodology of neuroarchaeology - reconstructing the peculiarities of the work of the human brain and psyche from artifacts found in archaeological excavations. From his research it becomes clear that the biological evolution of the brain, the evolution of cognitive functions and cultural practices cannot be separated. The bearer of a certain psyche creates a certain cultural environment around himself; the cultural environment, in turn, gives preference to the owners of a certain kind of brain, carriers of certain mental functions, who create a cultural environment, and so on.
It turns out that our brain is not simply a biological object; it is a bio-artifact, created by culture no less than by biological evolution. Culture creates functional systems of the brain and structural features of it that are then fixed by evolution for a long time. The main idea of modern theorists is that what evolves is not the human cognitive system as such - not memory, attention or thinking in themselves - but the readiness of these systems to change and develop. Evolution selects those with the most plastic brains.
 

Strategies, Brain, Neural Networks and Cognitive Science (Part 1)​




Look, this is a test. Ten crows were sitting on the roof. The hunter fired at one of them with his rifle. How many crows are left on the roof? Think a little about this before reading on.
The logical answer is, of course, nine. Any digital computer would give you that answer. In fact, the correct answer is ten. The rifle fired quietly and the roof was flat. The dead crow fell and stayed on the roof without the other nine noticing. But wait a minute! Suppose the rifle fired loudly and the roof was sloped. Then the answer is none, because the killed crow rolls off the roof and the others, frightened by the noise of the shot, fly away. These are the kinds of questions real life poses. Real life does not provide simple answers to complex, contextual questions. In real life your brain works differently from a digital computer.
In the test the logically correct answer is nine, but nine is only one answer. There are other ways to think about it. Thinking logically is not the same as just thinking. Go back and notice how your brain shifted its emphasis when you realized this was not a logic test. The full context of a problem determines how we think about it; context is part of the information. A new idea is rarely the result of logical thinking. Logical thinking most often leads to a contradiction in terms. In fact, human beings are not particularly good at thinking logically. The laws of thinking are not the laws of logic.
These sections turn to cognitive science and modern neuroscience. Most of the work on the brain and thinking today falls within an area called neurocomputing, which spans nearly 40 years of studying the architecture of neural networks in the brain - very different from the stimulus/response reflex-arc model of the brain that laid the foundation for the development of digital computers 40 years ago and that also serves as the model for strategies in NLP.
The real brain does not work in fixed steps; each cell interacts with the processes of other cells. The real brain is a globally interconnected, widely distributed system of many parallel processes operating simultaneously. Remember when we told you that NLP strategies follow a neat chain, step by step, one after another? Well, we cheated. All of it happens at the same time.
Obviously, our current understanding of internal computation in NLP needs updating. This is the first of a series of articles written to improve our models of strategies and to bring together some of the valuable discoveries of the past several decades that have revolutionized computer science, neuroscience and cognitive psychology.

But first, a little history​

Ever since Egyptian surgeons dissected the brain 5,000 years ago, we have tried to understand it and find models for how it functions. Aristotle thought that the function of the brain was to cool the blood. Descartes thought that consciousness resided in the pineal gland and that the rest of the brain stored memory in the form of traces.
Other metaphors usually explain how the brain works in terms of human-made devices, such as plumbing systems, telephone exchanges or, more recently, Neuro-Linguistic Programming (so named in 1975 because it drew on knowledge from neuroscience and linguistics). "Programming" refers to how we organize our actions and ideas to produce a result, and the metaphor comes from computer science (cybernetics).
There are notable differences between the computers of 1975 and the computers of today. Obviously, the computer on my desk is more powerful than the one that occupied the entire ground floor of the University of London when I studied there in the 1970s. Computers have changed beyond recognition in design and power, and our understanding of how the brain works has changed just as much. The digital computer has never been an adequate model of the brain. Artificial intelligence is not a substitute for the real thing.
Our assumption is that NLP is caught up in an outdated metaphor.

Digital computers
The idea of the Thinking Machine may have originated with George Boole's 1854 book An Investigation of the Laws of Thought. Boole described a way to define logic mathematically. He believed that the connection between algebra and language pointed to a higher logic, which he called the laws of thought. Boolean logic is now widely used in digital computers.
The next major step was taken nearly 100 years later by Alan Turing, who developed a generalized model of computation that is still the basis of the most complex and powerful machines in operation today. He showed that machines manipulating a binary algebra of zeros and ones could, in principle, solve any computable mathematical problem.
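As a side illustration (not part of the original article), here is a minimal Python sketch of Boole's point that arithmetic can be expressed entirely in logical operations on zeros and ones - a one-bit half adder built from nothing but XOR and AND:

# Binary arithmetic reduced to Boolean logic: a one-bit half adder.
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two single bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b   # XOR gives the sum bit, AND gives the carry bit

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
# 1 + 1 -> sum=0, carry=1: addition expressed purely in the laws Boole described.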
John von Neumann took Turing's idea and put it into practice. He was fascinated by how the mind works and believed that he was modeling the brain. Von Neumann built a computer whose design was an innovation at the time: a memory that stored together the numbers to be calculated and the instructions (programs) for calculating them. This was a big step forward compared with earlier computers, which had to be rewired for each different kind of operation. Von Neumann thought that this shared memory for data and programs was a model of mental flexibility. However, it created a bottleneck: the contents of memory can be examined only one item at a time. Modern computers have advanced enormously in the speed with which these operations can be performed, but the bottleneck remains. The brain has no such bottleneck.
A digital computer differs from the way our brain works in several respects. Digital computers manage data through a central processing unit. They handle data one unit at a time, and although there may be many processing units working in parallel, each of them works in a linear and sequential way. Because of this bottleneck, the faster they can run and the more units can run at the same time, the better.
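A toy sketch of the fetch-execute cycle behind this bottleneck (the mini instruction set is invented for illustration, not taken from the article): program and data sit in the same memory, and the processor touches only one memory cell per step.

# Program and data share one memory; every step goes through the same channel.
memory = [
    ("LOAD", 6),     # address 0: load the number stored at address 6
    ("ADD", 7),      # address 1: add the number stored at address 7
    ("STORE", 8),    # address 2: store the result at address 8
    ("HALT", None),  # address 3: stop
    None, None,      # addresses 4-5: unused
    40, 2, 0,        # addresses 6-7: data; address 8: result
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]      # fetch: one memory access
    pc += 1
    if op == "LOAD":
        acc = memory[addr]     # execute: another single memory access
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # 42 - every value squeezed through the one-item-at-a-time channel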
It reminds me of the somewhat counterintuitive picture of people examining Einstein's brain to see whether it is larger than the average human brain. Because digital computers operate in a linear and sequential way, they work through cause-and-effect algorithms and the logic of IF ... THEN ... sequences. Modern research paints a picture of the brain whose processes are far more complex.
Digital computers are often too precise for their own good. If the answer has to be yes or no, that paradoxically limits your thinking. More often the answer needs to be "maybe", depending on what else is going on. High precision may be necessary in mathematical problems, but more often than not it is a hindrance. We select and filter from a range of possibilities and keep many choices open as possibilities rather than trying to resolve them. The result of one chain of thought is fed back into a similar process to refine it. Paradoxically, it takes tremendous computing power for a computer to replicate this natural quality of human thinking.
Now for some really drastic differences. Digital computers do not learn; they store and pass on knowledge, which gives us the metaphor of the human brain as a library or database. In a computer, data is independent of the system that contains it. In the library or database metaphor, it does not matter in which library you find the book: it will be exactly the same. A book or a database can be transferred from one system to another without modification.
Now we can see where this metaphor breaks down. You cannot transfer knowledge from one consciousness to another. The meaning of this article will not be the same for you as it is for me. Meaning depends on context, as any crow will tell you.
There is a famous story about a computer analysis of cases collected in home-accident reports, in which accidents on stairs were investigated statistically and it was found that most of them occurred on the first and last steps. The logical suggestion: remove the first and last steps. A computer requires a programmer who stands outside it.
There is hope and promise that computers will think and beat people at their own games. Perhaps the best example is the work that grew out of developing computer programs able to play chess and to challenge the strongest chess masters. Initially, high hopes were pinned on this. It seemed the perfect test. Chess players supposedly analyze sequences of possibilities in their minds, and the correct move is the one that brings victory or an advantage at the end of the game. On this model, the best chess players would be those who could see furthest ahead and analyze the largest tree of possible moves.
Unfortunately, human players make mistakes. Either they fail to consider the immediate possibilities, because the number of possible moves on a chessboard is astronomical, or they pursue a plan only as deep as they can carry the analysis, without looking far ahead. Then their opponent surprises them with a move they did not foresee. All a computer had to do was calculate many moves ahead - further than a human chess player is capable of - and consider all the possibilities that human beings miss. Simpler, at least in principle.
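For illustration only (a hand-made tree of position scores, not real chess), here is a minimal sketch of the brute-force lookahead described above: examine every branch a few moves deep and pick the line with the best guaranteed outcome.

# Minimax over a toy game tree: leaves are position scores, sublists are
# the opponent's possible replies to each of our candidate moves.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

toy_tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(toy_tree))   # 3: the best score we can force against best replies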
Although computer chess has taken big steps in the past 10 years, the gap between the best computer chess programs and the best human players is wider than ever. The highest-rated computers have ratings that place them only somewhere within the first thousand players in the world.
When you model the success of a chess player, more often than not you will find that they sense positions rather than analyze them. They base this feeling on something similar that happened in the past. They do not calculate as many moves ahead as a computer; while the computer calculates, the master grasps positions with his mind's eye. They discard many positions as undesirable without trying to analyze why exactly those positions are bad. When one of the best chess players was asked how many moves ahead he sees, he replied: "One. But it is always the best one!"
Even in the simplest game - checkers - the unofficial world champion Dr. Marion Tinsley lost 2 games out of 40 to his closest rival, the computer program Chinook. A small aside: Chinook could calculate 3,000,000 moves per minute and look 20 moves ahead. Dr. Tinsley, who had lost only five games since becoming champion in 1955, said: "Chinook was programmed by man, and I was programmed by God."

Neuro-linguistic metaphor
How does the programming metaphor affect NLP? Consider modeling. NLP was originally developed by extracting patterns from idiosyncratic geniuses (Perls, Satir and particularly Erickson) and applying them in various fields. This was surprisingly useful and creative in some respects and disastrous in others. If you extract Erickson's remarkable hypnotic skills and treat them as if they can be transferred regardless of Erickson's ethics and values, you are asking for trouble. And the trouble will be proportional to the power of the tools you possess. Perhaps this is why Gregory Bateson, who had endorsed The Structure of Magic I, later said: "NLP? If you come across NLP, run as fast as possible in the opposite direction. I stopped sending people to study with Milton..."
Many NLP techniques read like algorithms. Step 1: get rapport. Step 2: access the state. Step 3: .... These step-by-step models of techniques are convenient as long as we remember that they do not really happen sequentially. They are a convenient fiction, a frozen abstraction. What would it even mean to do a six-step reframing with new behavior generation during a collapse of anchors using a metaphor?
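As a toy illustration of how such a technique reads on paper (the step names after the first two are hypothetical placeholders, not from any published format), here is the "frozen abstraction" written out as the strict sequence it pretends to be:

# A strictly ordered recipe, as the technique reads on paper. The point above
# is that this ordering is a convenient fiction: in a real interaction these
# processes overlap and run in parallel rather than one at a time.
steps = [
    "get rapport",        # Step 1 (from the text)
    "access the state",   # Step 2 (from the text)
    "anchor the state",   # hypothetical further steps
    "test the result",
]

for number, step in enumerate(steps, start=1):
    print(f"Step {number}: {step}")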
Another major area where the programming metaphor has been influential is strategies and modeling. Most stimulus-response anchoring and the modeling of internal-process strategies were based on the rebellion of Miller, Galanter and Pribram against the constraints (and behavioral tyranny) of the stimulus-response reflex in the central nervous system. As early as 1923 its discoverers (Sherrington and Pavlov) had referred to the stimulus-response reflex as no more than a convenient fiction. Miller and his colleagues improved on this model by adding feedback to what had historically been a linear model of neural communication.
The accepted wisdom about strategies is that you elicit physiology, beliefs and internal sequences of sensory representations with their associated submodalities. Strategy diagrams are drawn as algorithms with loops, pointers and steps. These maps are not the territory.

The brain is not a computer
The human brain weighs about 3 pounds and contains over 100 billion neurons. The cerebral cortex alone contains over 10 billion of them. It is the connections between nerve cells, rather than the cells themselves, that matter most. A single neuron can have up to 100 thousand inputs. The cortex contains over a million billion connections. If you counted them at one per second, it would take you 32 million years.
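A quick check of that arithmetic, assuming "a million billion" means 10^15 connections:

# Counting a million billion (1e15) connections at one per second.
connections = 1_000_000 * 1_000_000_000        # a million billion = 1e15
seconds_per_year = 60 * 60 * 24 * 365.25
years = connections / seconds_per_year
print(round(years / 1e6, 1), "million years")  # ~31.7 million years, i.e. about 32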
We have no electronics like this. No two brains are exactly alike. We are born with all our neurons, and in the first year of life up to 70% of them die off as structure is formed. The surviving neurons form an even more complex network of connections, and our brain quadruples in size. Certain connections are strengthened at the expense of others, which die away. We learn from consequences and from mistakes. Nerve cells specialize and form a hyperdense network. The brain is not independent of the world; it is shaped by the world. Neuroscientists today often describe the brain as an interconnected, decentralized, parallel, distributed network of simultaneous waves of interactively resonating patterns. The brain holds a great many hopes and fears at the same time.
The computer metaphor would have the mind be a symbol-manipulating system governed by logical laws. If that were the case, it could indeed be studied independently of the brain. But consciousness is not the brain, and it is very risky to build theories of how consciousness works without considering how the brain works. The brain is superior to all models, because it is the brain that builds all models.
The brain uses processes that change themselves. They create memory, which changes the way we think about the future; they make changes in us. We build perceptual filters that determine what to look for. We pay attention to something, thereby reinforcing the networks, and in doing so we build perceptual filters. The brain has to model many different possible futures at the same time. We cannot know in advance what to look for, because the world does not come to us with labels attached. We attach the labels and then often forget that we did so, thinking that the labels are an inherent part of the world. Computers can extend the nervous system; they cannot replace or simulate it. In fact, many cyberneticists build computers precisely in order to understand better how we ourselves think.
Our second article will look at the kinds of neural networks in computers that are modeled on the way the brain works, and then begin to lay out a new model of strategies that is not so digitally biased.

At the end - a story from Gregory Bateson.
He tells of a man who wanted to know about the brain - how it actually works, and whether computers will ever be smarter than humans. The man put the following question to the most powerful modern computer (which occupied an entire floor of the university): "Do you think you will ever think like a human being?"
The machine rumbled and muttered as it began to analyze its own computational capabilities. Finally the machine printed its answer on a piece of paper. The man hurriedly and excitedly read the neatly typed words: "That reminds me of a story ..."
 