Friday, December 20, 2019

Art Movements

Timeline
Prehistoric Art (~40,000–4,000 B.C.)
Ancient Art (4,000 B.C.–A.D. 400)
Medieval Art (500–1400)
Renaissance Art (1400–1600)
Mannerism (1527–1580)
Baroque (1600–1750)
Rococo (1699–1780)
Neoclassicism (1750–1850)
Romanticism (1780–1850)
Realism (1848–1900)
Impressionism (1865–1885)
Post-Impressionism (1885–1910)
Art Nouveau (1890–1910)
Fauvism (1900–1935)
Expressionism (1905–1920)
Cubism (1907–1914)
Surrealism (1916–1950)
Abstract Expressionism (1940s–1950s)
Op Art (1950s–1960s)
Pop Art (1950s–1960s)
Arte Povera (1960s)
Minimalism (1960s–1970s)
Conceptual Art (1960s–1970s)
Contemporary Art (1970–present)
  -Postmodernism
  -Feminist art
  -Neo-Expressionism
  -Street art
  -The Pictures Generation
  -Appropriation art
  -Young British Artists (YBA)
  -Digital art

Tuesday, November 19, 2019

The state is merely the modern pretence

“The state is merely the modern pretence, a shield, a make-belief, a concept. In reality, the ancient war-god holds the sacrificial knife, for it is in war that the sheep are sacrificed…So instead of human representatives or a personal divine being, we now have the dark gods of the state…The old gods are coming to life again in a time when they should have been superseded long ago, and nobody can see it.” (Carl Jung, Nietzsche’s Zarathustra)

Wednesday, November 13, 2019

The ever-growing television


A friend's Facebook post about Samsung's massive 219-inch MicroLED TV got me wondering whether there was data on the average TV size by year. A quick Google search provided me with the following.


From Tribeca: Movie Industry Must Bring The Theater "Home"

Tuesday, November 12, 2019

Whoever fights monsters

"Whoever fights monsters should see to it that in the process he does not become a monster. And if you gaze long enough into an abyss, the abyss will gaze back into you" -- Friedrich Nietzsche

Sunday, October 13, 2019

Lazarus Lizard (Podarcis muralis)



I remember first seeing these lizards about fifteen years ago, crawling on the rock walls while I was taking a walk through Mt. Lookout. Living in Northern Kentucky, I'd never seen the little critters before. Today, they seem to be everywhere.

In the Cincinnati area they are generally referred to as Lazarus lizards. Apparently they are not native to the area. According to Buckeye Yard & Garden Online:

"In 1951, 10-year-old George Rau Jr., step-son of Fred Lazarus III, came across some common wall lizards scurrying across rocky slopes while on a family vacation to Lake Garda in northern Italy. George smuggled a few (6 to 10 depending on the reference source) through Customs to release them at his family's home on Torrence Court located in the eastern Cincinnati suburb of Hyde Park.

Some of the European expats thrived to eventually become so numerous that Torrence Court became known as Lizard Hill. Local residents called them "Lazarus lizards" in misplaced recognition of the lizard's perceived patrons. Of course, they should have been named "George's Lizards" in honor of their true sponsor.

The lizard story may ring like local folklore; however, George Rau wrote a letter in 1989 to herpetologists at the Cincinnati Museum of Natural History detailing his role as the lizard leader. He also repeated his story in several interviews with the news media.

Research conducted by Cassandra Homan for her 2013 University of Cincinnati M.S. Thesis (see "More Information" below) added credibility to Rau's claim. She compared genetic samples collected from the Cincinnati lizards to samples taken from the reported source population in Europe and confirmed a substantial loss of genetic diversity indicating a genetic bottleneck. Her computer simulations suggested the bottleneck was likely associated with only three individuals surviving their release to become the founders of the Cincinnati populations."

Buckeye Yard & Garden Online: The Rise of Lazarus Lizards




Friday, October 4, 2019

Sucked into a story about crooked Chinese ex-mayor

A recent New York Post article stated that "Chinese authorities found more than 13 tons of gold stashed away in the basement of a former mayor’s home during a corruption investigation, according to news reports.  Police found the loot — worth hundreds of millions of dollars — in a secret cellar in the home of Zhang Qi, a onetime high ranking communist party official and former mayor of Danzhou, the Pakistani newspaper The News International reported.  And it wasn’t only gold — police also seized more than $37 billion in cash and assets."

New York Post: Cops in China find 13 tons of gold stashed in ex-mayor’s cellar

The story got me thinking. How does one hide 13 tons of gold and another $37 billion in physical cash and assets? For that matter, how much space are we talking about here? What does a ton of gold look like? What does a billion dollars in bills look like? Well, let's find out.

Starting with the gold, I found this image of 80 replica gold bars representing one ton of gold.


From this we can visualize 13 tons of gold taking up about as much space as 3 or 4 washing machines. Not all that significant and easily doable for a crooked politician or villainous drug lord. 

More interesting is the $37 billion in cash and assets. We have no idea how much of the assets were in cash, but even if only a small portion of it was cash it would take up a great deal of space. Here is artist Michael Marcovici's depiction of what $1 billion in US $100 notes would look like.



Of course, the bust happened in China, so it is likely the bills were not United States dollars. Chinese banknotes come in denominations of 1, 5, 10, 20, 50, and 100 yuan. As of today it takes about 7.15 yuan to equal 1 US dollar, so if the crooked ex-mayor was hoarding yuan, you have to visualize the image above times seven just to equal one billion dollars.

So returning to my original thought, it would take a significant amount of space to hide $37 billion in assets (actually about $37.5 billion, since the gold alone was worth roughly half a billion dollars). The article says the police found it in a secret cellar of Zhang Qi's home. I guess it was one hell of a cellar.
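
If you want to check the volumes yourself, here is a rough back-of-the-envelope sketch in Python. The gold density and the $100-bill dimensions are standard reference figures, the 7.15 exchange rate is the one quoted above, and everything else is just arithmetic.

# Rough volume estimates for the hoard described above (assumed reference figures noted inline).
GOLD_DENSITY_KG_M3 = 19_300   # gold is about 19,300 kg per cubic meter
GOLD_TONNES = 13              # metric tons reported found

gold_volume_m3 = (GOLD_TONNES * 1_000) / GOLD_DENSITY_KG_M3
print(f"13 tonnes of gold occupy about {gold_volume_m3:.2f} cubic meters")  # ~0.67 m^3

# A US $100 bill is roughly 156 mm x 66 mm x 0.11 mm.
BILL_VOLUME_M3 = 0.156 * 0.066 * 0.00011
bills_per_billion_usd = 1_000_000_000 / 100
usd_billion_m3 = bills_per_billion_usd * BILL_VOLUME_M3
print(f"$1 billion in $100 bills is roughly {usd_billion_m3:.1f} cubic meters")  # ~11 m^3

# In 100-yuan notes at 7.15 yuan per dollar, the same value needs about seven times as many bills.
YUAN_PER_USD = 7.15
print(f"The 100-yuan equivalent of $1 billion is roughly {usd_billion_m3 * YUAN_PER_USD:.0f} cubic meters")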



Thursday, September 5, 2019

Relevance and Sufficiency

The relevance and sufficiency conditions of a good argument appear to be an expansion of the validity concept in classical logic. Splitting the validity condition has the benefit of allowing nuanced judgements about the level of premise support.(1)

I. Relevance
Relevance refers to premises that provide some evidence or offer reasons that support the conclusion or can be arranged in a way from which the conclusion can be derived. Relevance can be categorized as positive relevance, negative relevance and irrelevance.

A. Positive Relevance, Negative Relevance & Irrelevance
1. Positive Relevance
When assessing an argument we would say that statement A is positively relevant to statement B if the truth of A counts in favor of the truth of B. In other words, A provides some evidence or reason to believe that B is true.

In the following case, the first statement is positively relevant to the second:
P. Smith has appendicitis, gout, and cancer of the bladder.
C. Smith is not healthy enough to run the 26-mile Boston Marathon.

Here the first statement provides evidence for the second statement, because it describes adverse aspects of Smith’s health, and good health is required to run a marathon.(2)

2. Negative Relevance
Statement A is negatively relevant to statement B if the truth of A counts against the truth of B. So if A is true, it provides some evidence or reason to believe that B is not true.

Consider the following examples of negative relevance:

P. Jogging often results in knee injuries.
C. Jogging improves a person’s general health.

Here, the first statement is negatively relevant to the second, because having knee injuries counts against having good general health.(2)

3. Irrelevance
Statement A is irrelevant to statement B if it is neither positively relevant nor negatively relevant to B. In other words, when statement A does not logically support or logically undermine statement B, we would say it is irrelevant.

P. Natural catastrophes such as earthquakes are beyond human control.
C. Human beings have no freedom of choice concerning their actions.

Here, the first statement cites natural events that are beyond human control, while the second statement is about human choices regarding their own actions. The truth of the first statement would not count as any reason to accept or reject the second, which is why this pair exemplifies irrelevance.(2)

B. Argument Analysis
If the premises of an argument, considered together, are irrelevant to its conclusion, or are negatively relevant, the argument is not cogent. Any case in which the relevancy condition of argument adequacy is not satisfied will be a case in which sufficiency is not satisfied either. (If premises are not even relevant to the conclusion, they cannot provide sufficient support for it.)(2)
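
To make that ordering explicit, here is a minimal toy encoding in Python (the names and the cogent() helper are my own illustration, not anything from the cited texts): relevance is judged first, and sufficiency is only assessed once the premises, taken together, are positively relevant.

from enum import Enum

class Relevance(Enum):
    POSITIVE = "positive"      # the premises count in favor of the conclusion
    NEGATIVE = "negative"      # the premises count against the conclusion
    IRRELEVANT = "irrelevant"  # the premises neither support nor undermine it

def cogent(premise_relevance: Relevance, premises_sufficient: bool) -> bool:
    """premise_relevance: how the premises, considered together, bear on the conclusion.
    premises_sufficient: whether they jointly give enough support to accept it."""
    if premise_relevance is not Relevance.POSITIVE:
        # Irrelevant or negatively relevant premises: not cogent, and the
        # sufficiency condition cannot be satisfied either.
        return False
    return premises_sufficient  # relevance passed; sufficiency is the remaining hurdle

# Marathon example: the premises about Smith's illnesses are positively relevant and,
# given what a marathon demands, plausibly sufficient.
print(cogent(Relevance.POSITIVE, premises_sufficient=True))    # True
print(cogent(Relevance.IRRELEVANT, premises_sufficient=True))  # False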


II. Sufficiency
Sufficiency refers to the degree of support the premises provide for the conclusion. Whereas relevance is a property of individual premises, sufficiency is a judgement made about all the premises supporting the conclusion, taken together. Hence, to be considered sufficient, the premises must provide enough support for the conclusion to be reasonably accepted.

This is perhaps the most difficult principle to apply because we have no clear guidelines to help us determine what constitutes sufficient grounds for the truth or merit of a conclusion. Argumentative contexts differ, and thus create different sufficiency demands.

The feature of the sufficiency principle that is most difficult to apply is the assignment of weight to each piece of supporting evidence.  Some sciences have well-developed sufficiency criteria in place. Statisticians, for example, have determined what constitutes a proper sample from which to draw defensible conclusions. But in informal discussion, it is usually very difficult to determine when enough evidence or evidence of the appropriate kind has been presented.





(1) Stanford Encyclopedia of Philosophy: Fallacies
(2) A Practical Study of Argument: Govier
Informal Logic and Logic: Blair
Attacking Faulty Reasoning: Damer


Wednesday, May 22, 2019

Research & Investigative Sources

Research
Corruption 
Corruption Perceptions Index: Transparency International (Germany)
-Ranks countries "by their perceived levels of public sector corruption, as determined by expert assessments and opinion surveys."

Worldwide Governance Indicators: World Bank
-Measure the quality of governance in over 200 countries, based on close to 40 data sources produced by over 30 organizations worldwide

Civil Liberties/Political Rights
Freedom in the World: Freedom House (US)
-For each country and territory, Freedom in the World analyzes the electoral process, political pluralism and participation, the functioning of the government, freedom of expression and of belief, associational and organizational rights, the rule of law, and personal autonomy and individual rights.

Human Freedom Index: Cato Institute & Fraser Institute
-The Human Freedom Index presents the state of human freedom in the world based on a broad measure that encompasses personal, civil, and economic freedom.

Press
-Measuring the level of freedom and editorial independence enjoyed by the press in nations and significant disputed territories around the world.


Internet
-Provide analytical reports and numerical ratings regarding the state of Internet freedom for countries worldwide.

Economic Freedom
Index of Economic Freedom: Heritage Foundation & Wall Street Journal (US)
-An annual index and ranking to measure the degree of economic freedom in the world's nations.


Political/Religious Violence
Armed Conflict Location & Event Data Project (ACLED)
-Non-governmental organization specializing in disaggregated conflict data collection, analysis, and crisis mapping. ACLED codes the dates and locations of all reported political violence and demonstration events in over 150 countries in real-time.

Crime

-List of countries by UNODC homicide rate per year per 100,000 inhabitants

Newspapers
United States Newspapers by State: 50states.com

Newspaper Archives
Chronicling America: Library of Congress
-Search America's historic newspaper pages from 1789-1963 or use the U.S. Newspaper Directory to find information about American newspapers published between 1690-present.

City Data
Advanced U.S. city comparison: City Data
-Compare cities in the U.S. by population, crime, education and other data.



Critical Race Theory


Film 
American Genre Film Archive (AGFA)
-AGFA exists to preserve the legacy of genre movies through collection, conservation, and distribution.

Investigation

Address, Phone Number Search
Einvestigator: Reverse Address or Phone Number Search

ABA Bank Routing Transit Numbers

Wednesday, May 8, 2019

The Map Is Not the Territory

The Map Is Not the Territory by Shane Parrish



The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.

“The map appears to us more real than the land.”

— D.H. Lawrence
The Relationship Between Map and Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

In Korzybski’s words:


A.) A map may have a structure similar or dissimilar to the structure of the territory.

B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.

C.) A map is not the actual territory.

D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. A map with the scale of one mile to one mile would not have the problems that maps have, nor would it be helpful in any way.

To solve this problem, the mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)

Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)

With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.

This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

Let’s check out an example.

***

By most accounts, Ron Johnson was one of the most successful and desirable retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they’re turning out the lights on a very painful and expensive mistake” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey by the late 1990s and early 2000s.

Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”

— Will Durant

With that success, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JC Penney. The situation of the department store was dour: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.

Their core position was a no-brainer though. JC Penney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to them and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s, the heyday of the mall building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JC Penney was making (some) money. There was cash in the register to help fund a transformation.

The idea was to take the best ideas from his experience at Apple (great customer service, consistent pricing with no markdowns and markups, immaculate displays, world-class products) and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh, and avoid the creeping staleness of retail.

Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JC Penney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.

The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JC Penney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.

What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JC Penney’s. Apple had a rabid, young, affluent fan-base before they built stores; JC Penney’s was not associated with youth or affluence. Apple had shiny products, and needed a shiny store; JC Penney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JC Penney was taking away discounts given prior, triggering massive deprival super-reaction.

“All models are wrong but some are useful.”

— George Box

In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JC Penney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JC Penney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)

The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer that tried to tackle a grand rapid, and the model he used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances, but not in others. The terrain had changed, but the old idea stuck.

***

One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series: Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid I can recall is his firm criticism of a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
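
For concreteness, here is a minimal sketch of one common flavor of the calculation, parametric VAR, which fits a normal distribution to historical daily returns and reads the loss threshold off the chosen confidence level. The return series and portfolio size below are invented for illustration, and the normality assumption is exactly the simplification at issue in what follows.

# Minimal parametric VaR sketch (illustrative only): assume daily returns are normally
# distributed, estimate mean and standard deviation from history, and read off the loss
# that should be exceeded only (1 - confidence) of the time.
import statistics
from statistics import NormalDist

def parametric_var(daily_returns, portfolio_value, confidence=0.99):
    mu = statistics.mean(daily_returns)
    sigma = statistics.stdev(daily_returns)
    worst_return = NormalDist(mu, sigma).inv_cdf(1 - confidence)  # quantile of the fitted normal
    return -worst_return * portfolio_value                        # expressed as a positive dollar loss

# Hypothetical history: small moves, mean near zero, standard deviation near 1% per day.
history = [0.002, -0.004, 0.011, -0.009, 0.005, -0.013, 0.007, -0.001, 0.010, -0.006]
print(f"99% one-day VaR: ${parametric_var(history, portfolio_value=100_000_000):,.0f}")

# The number is only as good as the history and the assumed distribution: a return far
# outside the sample (a fat-tail day) simply is not on this map.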

Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.

The problem, in Nassim’s words, is that:


A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.

In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyze their height and weight, we could then predict the distribution of heights and weights on a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.

But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.

Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability. Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.

We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z.” (Although risk managers treat it that way). What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.

In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.

Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models use as the worst possible case? We don’t know exactly, but we do know the previous worst case was 12.82%, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
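
A quick calculation shows just how “near impossible” that drop looks through a normal-distribution lens. The 1% daily standard deviation here is an assumed, typical figure for index returns, not a number from the text.

# How improbable does Black Monday look if daily index returns are assumed normal?
from statistics import NormalDist

daily_stdev = 0.01   # assumed typical daily volatility (~1%)
drop = -0.2261       # October 19, 1987: -22.61%

sigmas = abs(drop) / daily_stdev
p = NormalDist(0, daily_stdev).cdf(drop)   # underflows to 0.0 in double precision

print(f"A 22.61% one-day drop is about {sigmas:.1f} standard deviations from zero.")
print(f"The fitted normal assigns it a daily probability of {p}, i.e. effectively never.")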

But the tails are very fat in finance — improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)

A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:


There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.

I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes”. According to the same standards, he would be, “in general”, and “on average” a good risk manager.

This is like a GPS system that shows you where you are at all times but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.

It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)

This was navigating Tulsa with a map of Tatooine.

***

The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.

The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.

If one were to instead assume that there were no precisely accurate maps of the financial territory, they would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)

In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?

The approach requires not only assuming a future worst case far more severe than the past, but also dictates building an institution with a robust set of backup systems, and margins-of-safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.

When map and terrain differ, follow the terrain.

The trade-off, of course, is short-run rewards much less great than those available under more optimized models. Speaking of this, Charlie Munger has noted:


Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.

For Berkshire at least, the trade-off seems to have been worth it.

***

The salient point then is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we confuse the models with reality. For many people, the model creates its own reality. It is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes, it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)

How do we do better? This is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant by stepping back to understand the context in which a map is useful, and where the cliffs might lie. Until we do that, we are the turkey.

Sunday, February 3, 2019

Claparede's Pinprick Experiment

In 1911, a Swiss physician and psychologist named Édouard Claparède published his observations of a female amnesiac patient. The woman was suffering from a debilitating form of amnesia that left her incapable of forming new memories. She had suffered localized brain damage that preserved her basic mechanical and reasoning skills, along with most of her older memories. But beyond the duration of a few minutes, the recent past was lost to her—a condition brilliantly captured in the movie Memento, in which a man suffering similar memory loss solves a mystery by furiously scrawling new information on the backs of Polaroids before his memories fade to black.

Claparède's patient would have seemed straight out of a slapstick farce had her condition not been so tragic. Each day the doctor would greet her and run through a series of introductions. If he then left for 15 minutes, she would forget who he was. They'd do the introductions all over again. One day, Claparède decided to vary the routine. He introduced himself to the woman as usual, but when he reached to shake her hand for the first time, he concealed a pin in his palm.

It wasn't friendly, but Claparède was onto something. When he arrived the next day, his patient greeted him with the usual blank welcome—no memory of yesterday's pinprick, no memory of yesterday at all—until Claparède extended his hand. Without being able to explain why, the woman refused to shake. She was incapable of forming new memories, yet she had nevertheless remembered something—a subconscious sense of danger, a remembrance of past trauma. She failed utterly to recognize the face and the voice she'd encountered every day for months. But somehow, buried in her mind, she remembered a threat.


Discover: Fear in the Brain

Experiments on Implicit Memory in a Korsakoff Patient by Claparède (1907)
https://www.coursera.org/lecture/introduction-psych/lecture-8-what-is-not-forgotten-14-39-min-lC9MU

Tuesday, January 8, 2019

Future Islands - Like the Moon



Unofficial video for "Like the Moon" by Future Islands. The guy who made the video took an old Russian science fiction film and edited it perfectly to the song.