Thursday, June 29, 2017

How the brain does face recognition


This is a beautiful result. IIUC, these neuroscientists use the term "face axis" for what machine learning would call variation along an eigenface or feature vector.
Scientific American: ...using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at Caltech have finally cracked the neural code for face recognition. The researchers found the firing rate of each face cell corresponds to separate facial features along an axis. Like a set of dials, the cells are fine-tuned to bits of information, which they can then channel together in different combinations to create an image of every possible face. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees, by simply tracking the electrical activity of its face cells.”
I never believed the "Jennifer Aniston neuron" results, which seemed implausible from a neural architecture perspective. I thought the encoding had to be far more complex and modular, and apparently that's the case. The single-neuron claim has been widely propagated (for over a decade!) but now seems to be yet another result that fails to replicate after invading the meme space of credulous minds.
... neuroscientist Rodrigo Quian Quiroga found that pictures of actress Jennifer Aniston elicited a response in a single neuron. And pictures of Halle Berry, members of The Beatles or characters from The Simpsons activated separate neurons. The prevailing theory among researchers was that each neuron in the face patches was sensitive to a few particular people, says Quiroga, who is now at the University of Leicester in the U.K. and not involved with the work. But Tsao’s recent study suggests scientists may have been mistaken. “She has shown that neurons in face patches don’t encode particular people at all, they just encode certain features,” he says. “That completely changes our understanding of how we recognize faces.”
Modular feature sensitivity -- just like in neural net face recognition:
... To decipher how individual cells helped recognize faces, Tsao and her postdoc Steven Le Chang drew dots around a set of faces and calculated variations across 50 different characteristics. They then used this information to create 2,000 different images of faces that varied in shape and appearance, including roundness of the face, distance between the eyes, skin tone and texture. Next the researchers showed these images to monkeys while recording the electrical activity from individual neurons in three separate face patches.

All that mattered for each neuron was a single-feature axis. Even when viewing different faces, a neuron that was sensitive to hairline width, for example, would respond to variations in that feature. But if the faces had the same hairline and different-size noses, the hairline neuron would stay silent, Chang says. The findings explained a long-disputed issue in the previously held theory of why individual neurons seemed to recognize completely different people.

Moreover, the neurons in different face patches processed complementary information. Cells in one face patch—the anterior medial patch—processed information about the appearance of faces such as distances between facial features like the eyes or hairline. Cells in other patches—the middle lateral and middle fundus areas—handled information about shapes such as the contours of the eyes or lips. Like workers in a factory, the various face patches did distinct jobs, cooperating, communicating and building on one another to provide a complete picture of facial identity.

Once Chang and Tsao knew how the division of labor occurred among the “factory workers,” they could predict the neurons’ responses to a completely new face. The two developed a model for which feature axes were encoded by various neurons. Then they showed monkeys a new photo of a human face. Using their model of how various neurons would respond, the researchers were able to re-create the face that a monkey was viewing. “The re-creations were stunningly accurate,” Tsao says. In fact, they were nearly indistinguishable from the actual photos shown to the monkeys.
This is the original paper in Cell:
The Code for Facial Identity in the Primate Brain

Le Chang, Doris Y. Tsao

Highlights
• Facial images can be linearly reconstructed using responses of ∼200 face cells
• Face cells display flat tuning along dimensions orthogonal to the axis being coded
• The axis model is more efficient, robust, and flexible than the exemplar model
• Face patches ML/MF and AM carry complementary information about faces

Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
200 cells is interesting because (IIRC) standard deep learning face recognition packages currently use a 128-dimensional feature space (embedding). These packages perform roughly as well as humans (or perhaps a bit better?).
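For concreteness, here is a toy simulation of the axis model described above (my own sketch, not the authors' code): each simulated face cell fires in proportion to the projection of a face, represented as a point in a 50-dimensional face space, onto that cell's preferred axis, and the face is then recovered from the population response by linear least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

d_face, n_cells = 50, 200                 # 50-d face space, ~200 recorded face cells
axes = rng.normal(size=(n_cells, d_face)) # each cell's preferred axis in face space
face = rng.normal(size=d_face)            # a face = a point in the 50-d space

# Axis model: each cell's firing rate is the projection of the face onto its axis,
# plus noise (flat tuning along orthogonal directions falls out automatically).
rates = axes @ face + 0.1 * rng.normal(size=n_cells)

# Linear decoding: recover the face-space coordinates from the population response.
face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)
print("reconstruction correlation:", round(float(np.corrcoef(face, face_hat)[0, 1]), 3))
```

With ~200 cells and only 50 dimensions the linear system is overdetermined, so the reconstruction is essentially exact even with noisy firing rates, which is the intuition behind the nearly indistinguishable decoded faces reported in the paper.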

Monday, June 26, 2017

Face Recognition applied at scale in China



The Chinese government is not the only entity that has access to millions of faces + identifying information. So do Google, Facebook, Instagram, and anyone who has scraped information from similar social networks (e.g., US security services, hackers, etc.).

In light of such ML capabilities it seems clear that anti-ship ballistic missiles can easily target a carrier during the final maneuver phase of descent, using optical or infrared sensors (let alone radar).
Terminal targeting of a moving aircraft carrier by an ASBM like the DF21D

Simple estimates: a 10 min flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. A missile course correction at a distance of ~10 km from the target allows ~10 s of maneuver (assuming Mach 5-10 velocity) and requires only a modest angular correction. At this distance a 100 m sized target has an angular size of ~0.01 rad, so it should be readily detectable in an optical image. (Carriers are visible to the naked eye from space!) Final targeting at a distance of ~1 km can use a combination of optical / IR / radar sensors that makes countermeasures difficult.
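A quick check of these order-of-magnitude numbers (my own back-of-the-envelope script; the speeds and distances are just the assumptions stated above):

```python
flight_time = 10 * 60                 # s, ~10 min ballistic flight
for mph in (20, 30):                  # assumed carrier speed range
    v = mph * 0.447                   # convert to m/s
    print(f"carrier at {mph} mph moves ~{v * flight_time / 1000:.0f} km during flight")

correction_dist = 10_000              # m, terminal correction begins ~10 km out
for mach in (5, 10):                  # assumed terminal missile speed range
    v = mach * 340                    # m/s, rough sea-level sound speed
    print(f"at Mach {mach}: ~{correction_dist / v:.1f} s from 10 km to impact")

target = 100                          # m, carrier length scale
print(f"angular size at 10 km: ~{target / correction_dist:.3f} rad")
```

The exact outputs depend on the assumed speeds, but they land in the same ballpark as the estimates above: kilometers of position uncertainty, seconds of terminal maneuver time, and an easily resolvable target.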

So hitting a moving aircraft carrier does not seem especially challenging with modern technology.

Friday, June 23, 2017

The Prestige: Are You Watching Closely?



2016 was the 10th anniversary of The Prestige, one of the most clever films ever made. This video reveals aspects of the movie that will be new even to fans who have watched it several times. Highly recommended!
Wikipedia: The Prestige is a 2006 British-American mystery thriller film directed by Christopher Nolan, from a screenplay adapted by Nolan and his brother Jonathan from Christopher Priest's 1995 novel of the same name. Its story follows Robert Angier and Alfred Borden, rival stage magicians in London at the end of the 19th century. Obsessed with creating the best stage illusion, they engage in competitive one-upmanship with tragic results. The film stars Hugh Jackman as Robert Angier, Christian Bale as Alfred Borden, and David Bowie as Nikola Tesla. It also stars Michael Caine, Scarlett Johansson, Piper Perabo, Andy Serkis, and Rebecca Hall.
See also Feynman and Magic -- Feynman was extremely good at reverse-engineering magic tricks.

Sunday, June 18, 2017

Destined for War? America, China, and the Thucydides Trap



Graham Allison was Dean of the Kennedy School of Government at Harvard and Assistant Secretary of Defense under Clinton. I also recommend his book on Lee Kuan Yew.

Thucydides: “It was the rise of Athens, and the fear that this inspired in Sparta, that made war inevitable.” More here and here.
Destined for War: Can America and China Escape Thucydides’s Trap?

In Destined for War, the eminent Harvard scholar Graham Allison explains why Thucydides’s Trap is the best lens for understanding U.S.-China relations in the twenty-first century. Through uncanny historical parallels and war scenarios, he shows how close we are to the unthinkable. Yet, stressing that war is not inevitable, Allison also reveals how clashing powers have kept the peace in the past — and what painful steps the United States and China must take to avoid disaster today.
At 1h05min Allison answers the following question.
Is there any reason for optimism under President Trump in foreign affairs?

[65:43] "... Harvard and Cambridge ... ninety-five percent of whom voted [against Trump] ... so we hardly know any people in, quote, 'real America' and we don't have any perception or understanding or feeling for this, but I come from North Carolina and my wife comes from Ohio ..."

[66:33] "... in large parts of the country they have extremely different views than the New York Times or the Washington Post or, you know, the elite media ..."

[67:11] "... I think part of what Trump represents is a rejection of the establishment, especially the political class and the elites, which are places like us, places like Harvard and others, who lots of people in our society don't think have done a great job with the opportunities that our country has had ..."

[67:33] "... Trump's willingness to not be orthodox, to not be captured by the conventional wisdom, to explore possibilities ..."

[68:31] "... he's not beholden to the Jewish community, he's not beholden to the Republican Party, he's not become beholden to the Democratic Party ..."

[69:26] "... I think I'm hopeful."

See also:
Everything Under the Heavens and China's Conceptualization of Power
Thucydides Trap, China-US relations, and all that

Friday, June 16, 2017

Scientific Consensus on Cognitive Ability?


From the web site of the International Society for Intelligence Research (ISIR): a summary of the recent debate involving Charles Murray, Sam Harris, Richard Nisbett, Eric Turkheimer, Paige Harden, Razib Khan, Bo and Ben Winegard, Brian Boutwell, Todd Shackelford, Richard Haier, and a cast of thousands! ISIR is the main scientific society for researchers of human intelligence, and is responsible for the Elsevier journal Intelligence.

If you click through to the original, there are links to resources in this debate ranging from podcasts (Harris and Murray), to essays at Vox, Quillette, etc.

I found the ISIR summary via a tweet by Timothy Bates, who sometimes comments here. I wonder what he has to say about all this, given that his work has been cited by both sides :-)
TALKING ABOUT COGNITIVE ABILITY IN 2017

[ Click through for links. ]

2017 has already seen more science-led findings on cognitive ability, and more public discussion about the origins and the social and moral implications of ability, than we have had in some time, which should be good news for those seeking to understand and grow cognitive ability. This post brings together some of these events, linking talk about differences in reasoning that are so near to our sense of autonomy and identity.

Middlebury
Twenty years ago, when Dr Charles Murray co-authored a book with Harvard psychologist Richard Herrnstein, he opened up a conversation about the role of ability in the fabric of society, and in the process became famous for several things (most of which he didn't say), but for which he, and that book – The Bell Curve – came to act as lightning rods in a cauldron that compresses complex ideas and multiple people into simpler slogans. Twenty years on, the Middlebury campus showed this has made even speaking to a campus audience fraught with danger.

Waking Up
In the wake of this disrupted meeting, Sam Harris interviewed Dr Murray in a podcast listened to (and viewed on YouTube) by an audience of many thousands, creating a new audience and new interest in ideas about ability, its measurement, and its relevance to modern society.

Vox populi
The Harris podcast in turn led to a response, published in Vox, in which IQ, genetics, and social psychology experts Professors Eric Turkheimer, Paige Harden, and Richard Nisbett responded critically to the ideas raised (and those not raised) which they argue are essential for an informed debate on group differences.

Quillette
And that in turn led to two more responses: the first by criminologists and evolutionary psychologists Bo and Ben Winegard, Brian Boutwell, and Todd Shackelford in Quillette, and a second post at Quillette, also supportive of the Murray-Harris interaction, from past president of ISIR and expert intelligence researcher Professor Rich Haier.

And that led to a series of planned essays by Professor Harden (the first of which is now published here) and Eric Turkheimer (here). Each of these posts contains a wealth of valuable information and links to original papers, and they are responsive to each other: addressing points made in the other posts with citations, clarifications, and productive disagreement where it still exists. They're worth reading.

The answer, in 2017, may be a cautious "Yes, perhaps we can talk about differences in human cognitive ability." And listen, reply, and perhaps even reach a scientific consensus.

[ Added: 6/15 Vox response from Turkheimer et al. that doesn't appear to be noted in the ISIR summary. ]
In a recent post, NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence, I noted that scientific evidence overwhelmingly supports the following claims:
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size
I believe that, perhaps modulo the word near in #3, every single listed participant in the above debate would agree with these claims.

(0-3) above take no position on the genetic basis of group differences in measured cognitive ability. That is where most of the debate is focused. However, I think it's fair to say that points (0-3) form a consensus view among leading experts in 2017.

As far as what I think the future will bring, see Complex Trait Adaptation and the Branching History of Mankind.

Thursday, June 15, 2017

Everything Under the Heavens and China's Conceptualization of Power



Howard French discusses his new book, Everything Under the Heavens: How the Past Helps Shape China's Push for Global Power, with Orville Schell. The book is primarily focused on the Chinese historical worldview and how it is likely to affect China's role in geopolitics.

French characterizes his book as, in part,
... an extended exploration of the history of China's conceptualization of power ... and a view as to how ... the associated contest with the United States for primacy ... in the world could play out.
These guys are not very quantitative, so let me clarify a part of their discussion that was left rather ambiguous. It is true that demographic trends are working against China, which has a rapidly aging population. French and Schell talk about a 10-15 year window during which China has to grow rich before it grows old (a well-traveled meme). From the standpoint of geopolitics this is probably not the correct or relevant analysis. China's population is ~4x that of the US. If, say, demographic trends limit this to only an effective 3x or 3.5x advantage in working age individuals, China still only has to reach ~1/3 of US per capita income in order to have a larger overall economy. It seems unlikely that there is any hard cutoff preventing China from reaching, say, 1/2 the US per capita GDP in a few decades. (Obviously a lot of this growth is still "catch-up" growth.) At that point its economy would be the largest in the world by far, and its scientific-technological workforce and infrastructure would be far larger than that of any other country.
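The arithmetic, spelled out (a rough sketch with round 2017-ish numbers of my own choosing, purely for illustration):

```python
us_pop, cn_pop = 325e6, 1.39e9       # rough 2017 populations
us_gdp_per_cap = 60e3                # USD, rough 2017 figure

pop_ratio = cn_pop / us_pop          # ~4.3x
effective_ratio = 3.0                # suppose aging cuts the working-age advantage to ~3x

# Per capita GDP China needs for its total economy to equal the US economy:
breakeven = us_gdp_per_cap / effective_ratio
print(f"population ratio ~{pop_ratio:.1f}x")
print(f"break-even per capita GDP: ~${breakeven:,.0f} "
      f"(~{breakeven / us_gdp_per_cap:.0%} of the US level)")

# At half the US per capita level, the total economy would be ~1.5x the US economy
# even with only a 3x effective population advantage.
print(f"at 1/2 US per capita GDP: total ~{0.5 * effective_ratio:.1f}x US GDP")
```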




Gideon Rachman writes for the FT, so it's not surprising that his instincts seem a bit stronger when it comes to economics. He makes a number of incisive observations during this interview.

At 16 min, he mentions:
I was in Beijing about a month before the vote [the US election], in fact when the first debates were going on, and I thought that the official Chinese [i.e., government officials] in our meeting, and the sort of semi-official academics, were clearly pulling for Trump.
See also Trump Triumph Viewed From China.

Related: Thucydides trap, China-US relations, and all that.

Tuesday, June 13, 2017

Climate Risk and AI Risk for Dummies

The two figures below come from recent posts on climate change and AI. Please read them.

The squiggles in the first figure illustrate uncertainty in how climate will change due to CO2 emissions. The squiggles in the second figure illustrate uncertainty in the advent of human-level AI.



Many are worried about climate change because polar bears, melting ice, extreme weather, sacred Gaia, sea level rise, sad people, etc. Many are worried about AI because job loss, human dignity, Terminator, Singularity, basilisks, sad people, etc.

You can choose to believe in any of the grey curves in the AI graph because we really don't know how long it will take to develop human level AI, and AI researchers are sort of rational scientists who grasp uncertainty and epistemic caution.

You cannot choose to believe in just any curve in a climate graph because if you pick the "wrong" curve (e.g., +1.5 degree Celsius sensitivity to a doubling of CO2, which is fairly benign, but within the range of IPCC predictions) then you are a climate denier who hates science, not to mention a bad person :-(

Oliver Stone confronts Idiocracy



See earlier post Trump, Putin, Stephen Cohen, Brawndo, and Electrolytes.

Note to morons: Russia's 2017 GDP is less than that of France, Brazil, Italy, Canada, and just above that of Korea and Australia. (PPP-adjusted they are still only #6 in the world, between Germany and Indonesia: s-s-scary!) Apart from their nuclear arsenal (which they will struggle to pay for in the future), they are hardly a serious geopolitical competitor to the US and certainly not to the West as a whole. Relax! Trump won the election, not Russia.


This is a longer (and much better) discussion of Putin with Oliver Stone and Stephen Cohen. At 17:30 they discuss the "Russian attack" on our election.

Sunday, June 11, 2017

Rise of the Machines: Survey of AI Researchers


These predictions are from a recent survey of AI/ML researchers. See SSC and also here for more discussion of the results.
When Will AI Exceed Human Performance? Evidence from AI Experts

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Another figure:


Keep in mind that the track record for this type of prediction, even by experts, is not great:


See below for the cartoon version :-)



Wednesday, June 07, 2017

Complex Trait Adaptation and the Branching History of Mankind

Note added in response to a 2020 Twitter mob attack that attempts to misrepresent my views:

This is not my research. The authors are affiliated with Columbia University and the New York Genome Center.

I do not work on evolutionary history or signals of recent natural selection, but I defend the freedom of other researchers to investigate it.

One has to make a big conceptual leap to claim this research implies group differences. The fact that a certain set of genetic variants has been under selection does not necessarily imply anything about overall differences in phenotype between populations. Nevertheless the work is interesting and sheds some light on natural selection in deep human history.

Racist inferences based on the results of the paper are the fault of the reader, not the authors of the paper or of this blog.




A new paper (94 pages!) investigates signals of recent selection on traits such as height and educational attainment (proxy for cognitive ability). Here's what I wrote about height a few years ago in Genetic group differences in height and recent human evolution:
These recent Nature Genetics papers offer more evidence that group differences in a complex polygenic trait (height), governed by thousands of causal variants, can arise over a relatively short time (~ 10k years) as a result of natural selection (differential response to varying local conditions). One can reach this conclusion well before most of the causal variants have been accounted for, because the frequency differences are found across many variants (natural selection affects all of them). Note the first sentence above contradicts many silly things (drift over selection, genetic uniformity of all human subpopulations due to insufficient time for selection, etc.) asserted by supposed experts on evolution, genetics, human biology, etc. over the last 50+ years. The science of human evolution has progressed remarkably in just the last 5 years, thanks mainly to advances in genomic technology.

Cognitive ability is similar to height in many respects, so this type of analysis should be possible in the near future. ...
The paper below conducts an allele frequency analysis on admixture graphs, which contain information about branching population histories. Thanks to recent studies, they now have enough data to run the analysis on educational attainment as well as height. Among their results: a clear signal that modern East Asians experienced positive selection (~10kya?) for + alleles linked to educational attainment (see left panel of figure above; CHB = Chinese, CEU = Northern Europeans). These variants have also been linked to neural development.
Detecting polygenic adaptation in admixture graphs

Fernando Racimo (1), Jeremy J. Berg (2), and Joseph K. Pickrell (1,2)
(1) New York Genome Center, New York, NY 10013, USA; (2) Department of Biological Sciences, Columbia University, New York, NY 10027, USA
June 4, 2017

Abstract
An open question in human evolution is the importance of polygenic adaptation: adaptive changes in the mean of a multifactorial trait due to shifts in allele frequencies across many loci. In recent years, several methods have been developed to detect polygenic adaptation using loci identified in genome-wide association studies (GWAS). Though powerful, these methods suffer from limited interpretability: they can detect which sets of populations have evidence for polygenic adaptation, but are unable to reveal where in the history of multiple populations these processes occurred. To address this, we created a method to detect polygenic adaptation in an admixture graph, which is a representation of the historical divergences and admixture events relating different populations through time. We developed a Markov chain Monte Carlo (MCMC) algorithm to infer branch-specific parameters reflecting the strength of selection in each branch of a graph. Additionally, we developed a set of summary statistics that are fast to compute and can indicate which branches are most likely to have experienced polygenic adaptation. We show via simulations that this method - which we call PhenoGraph - has good power to detect polygenic adaptation, and applied it to human population genomic data from around the world. We also provide evidence that variants associated with several traits, including height, educational attainment, and self-reported unibrow, have been influenced by polygenic adaptation in different human populations.

https://doi.org/10.1101/146043
From the paper:
We find evidence for polygenic adaptation in East Asian populations at variants that have been associated with educational attainment in European GWAS. This result is robust to the choice of data we used (1000 Genomes or Lazaridis et al. (2014) panels). Our modeling framework suggests that selection operated before or early in the process of divergence among East Asian populations - whose earliest separation dates at least as far back as approximately 10 thousand years ago [42, 43, 44, 45] - because the signal is common to different East Asian populations (Han Chinese, Dai Chinese, Japanese, Koreans, etc.). The signal is also robust to GWAS ascertainment (Figure 6), and to our modeling assumptions, as we found a significant difference between East Asian and non-East-Asian populations even when performing a simple binomial sign test (Tables S4, S9, S19 and S24).
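The "simple binomial sign test" mentioned at the end is easy to illustrate (a toy simulation of my own, not the paper's code or data): for each GWAS hit, ask whether the trait-increasing allele is at higher frequency in one population than the other, and test whether the imbalance exceeds what chance alone would give.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(1)

# Simulated allele frequencies at GWAS hits in two populations, with a small
# systematic shift in population A (purely illustrative numbers).
n_snps = 120
freq_b = rng.uniform(0.05, 0.95, n_snps)
freq_a = np.clip(freq_b + rng.normal(0.02, 0.05, n_snps), 0, 1)

# Sign test: under the null of no directional selection, each trait-increasing
# allele is equally likely to be higher in either population.
n_higher = int(np.sum(freq_a > freq_b))
result = binomtest(n_higher, n_snps, p=0.5)
print(f"{n_higher}/{n_snps} trait-increasing alleles at higher frequency in pop A; "
      f"p = {result.pvalue:.3g}")
```

The sign test is crude (it ignores effect sizes and drift structure), which is why the paper's main analysis works on the admixture graph instead, but it is a useful robustness check.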

Sunday, June 04, 2017

Epistemic Caution and Climate Change

[ UPDATE: See 2019 post: Certainties and Uncertainties in our Energy and Climate Futures: Steve Koonin ]

I have not, until recently, invested significant time in trying to understand climate modeling. These notes are primarily for my own use; however, I welcome comments from readers who have studied this issue in more depth.

I take a dim view of people who express strong opinions about complex phenomena without having understood the underlying uncertainties. I have yet to personally encounter anyone who claims to understand all of the issues discussed below, but I constantly meet people with strong views about climate change.

See my old post on epistemic caution Intellectual honesty: how much do we know?
... when it comes to complex systems like society or economy (and perhaps even climate), experts have demonstrably little predictive power. In rigorous studies, expert performance is often no better than random.  
... worse, experts are usually wildly overconfident about their capabilities. ... researchers themselves often have beliefs whose strength is entirely unsupported by available data.
Now to climate and CO2. AFAIU, the direct heating effect from increasing CO2 concentration is only a logarithmic function of the concentration (all the absorption is in a narrow frequency band). The main heating effects in climate models come from secondary effects, such as the water vapor distribution in the atmosphere, which are neither calculable from first principles nor under good experimental/observational control. Certainly any "catastrophic" outcomes would have to result from these secondary feedback effects.

The first paper below gives an elementary calculation of direct effects from atmospheric CO2. This is the "settled science" part of climate change -- it depends on relatively simple physics. The prediction is about 1 degree Celsius of warming from a doubling of CO2 concentration. Anything beyond this is due to secondary effects which, in their totality, are not well understood -- see second paper below, about model tuning, which discusses rather explicitly how these unknowns are dealt with.
Simple model to estimate the contribution of atmospheric CO2 to the Earth’s greenhouse effect
Am. J. Phys. 80, 306 (2012)
http://dx.doi.org/10.1119/1.3681188

We show how the CO2 contribution to the Earth’s greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the “climate sensitivity” (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere’s temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
From Conclusions:
... The question of feedbacks, in its broadest sense, is the whole question of climate change: namely, how much and in which way can we expect the Earth to respond to an increase of the average surface temperature of the order of 1 degree, arising from an eventual doubling of the concentration of CO2 in the atmosphere? And what further changes in temperature may result from this response? These are, of course, questions for climate scientists to resolve. ...
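To make the logarithmic dependence concrete, here is a minimal numerical sketch (my own illustration, not taken from either paper): warming is modeled as dT = S * log2(C/C0), where S is the sensitivity per doubling. S ~ 1 degree C is the no-feedback value discussed above; the larger values stand in for feedback-inclusive sensitivities.

```python
import numpy as np

def warming(c_ratio, sensitivity):
    """Equilibrium warming (deg C) for a CO2 ratio C/C0, assuming the standard
    logarithmic forcing relation dT = S * log2(C/C0)."""
    return sensitivity * np.log2(c_ratio)

# S values: ~1 C no-feedback estimate, plus the oft-quoted 1.5-4.5 C range.
for S in (1.0, 1.5, 3.0, 4.5):
    print(f"S = {S:.1f} C/doubling: "
          f"{warming(2.0, S):.1f} C at 2x CO2, {warming(1.5, S):.1f} C at 1.5x CO2")
```

The point is that all of the interesting physics is packed into S; the logarithm itself is the easy, "settled" part.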
The paper below concerns model tuning. It should be apparent that there are many adjustable parameters hidden in any climate model. One wonders whether the available data, given their own uncertainties, can constrain this high-dimensional parameter space sufficiently to produce predictive power in a rigorous statistical sense.

The first figure below illustrates how different choices of these parameters can affect model predictions. Note the huge range of possible outcomes! The second figure below illustrates some of the complex physical processes which are subsumed in the parameter choices. Over longer timescales, (e.g., decades) uncertainties such as the response of ecosystems (e.g., plant growth rates) to increased CO2 would play a role in the models. It is obvious that we do not (may never?) have control over these unknowns.
THE ART AND SCIENCE OF CLIMATE MODEL TUNING

Bulletin of the American Meteorological Society, March 2017

... Climate model development is founded on well-understood physics combined with a number of heuristic process representations. The fluid motions in the atmosphere and ocean are resolved by the so-called dynamical core down to a grid spacing of typically 25–300 km for global models, based on numerical formulations of the equations of motion from fluid mechanics. Subgrid-scale turbulent and convective motions must be represented through approximate subgrid-scale parameterizations (Smagorinsky 1963; Arakawa and Schubert 1974; Edwards 2001). These subgrid-scale parameterizations include coupling with thermodynamics; radiation; continental hydrology; and, optionally, chemistry, aerosol microphysics, or biology.

Parameterizations are often based on a mixed, physical, phenomenological and statistical view. For example, the cloud fraction needed to represent the mean effect of a field of clouds on radiation may be related to the resolved humidity and temperature through an empirical relationship. But the same cloud fraction can also be obtained from a more elaborate description of processes governing cloud formation and evolution. For instance, for an ensemble of cumulus clouds within a horizontal grid cell, clouds can be represented with a single-mean plume of warm and moist air rising from the surface (Tiedtke 1989; Jam et al. 2013) or with an ensemble of such plumes (Arakawa and Schubert 1974). Similar parameterizations are needed for many components not amenable to first-principle approaches at the grid scale of a global model, including boundary layers, surface hydrology, and ecosystem dynamics. Each parameterization, in turn, typically depends on one or more parameters whose numerical values are poorly constrained by first principles or observations at the grid scale of global models. Being approximate descriptions of unresolved processes, there exist different possibilities for the representation of many processes. The development of competing approaches to different processes is one of the most active areas of climate research. The diversity of possible approaches and parameter values is one of the main motivations for model inter-comparison projects in which a strict protocol is shared by various modeling groups in order to better isolate the uncertainty in climate simulations that arises from the diversity of models (model uncertainty). ...

... All groups agreed or somewhat agreed that tuning was justified; 91% thought that tuning global-mean temperature or the global radiation balance was justified (agreed or somewhat agreed). ... the following were considered acceptable for tuning by over half the respondents: atmospheric circulation (74%), sea ice volume or extent (70%), and cloud radiative effects by regime and tuning for variability (both 52%).
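As a cartoon of what "tuning to the global-mean temperature" means, here is a zero-dimensional energy-balance toy (my own illustration, vastly simpler than a real GCM): one free parameter, an effective emissivity standing in for everything the subgrid parameterizations do, is adjusted so the model reproduces the observed global-mean surface temperature.

```python
from scipy.optimize import brentq

# Zero-dimensional energy balance: absorbed solar = emitted infrared.
S, sigma, albedo = 1361.0, 5.67e-8, 0.30   # solar constant, Stefan-Boltzmann, planetary albedo
T_obs = 288.0                              # observed global-mean surface temperature, K

def energy_imbalance(eps, T=T_obs):
    """Net flux imbalance (W/m^2) for effective emissivity eps at temperature T."""
    return (1 - albedo) * S / 4 - eps * sigma * T**4

# "Tune" the free parameter so the model matches the observed temperature.
eps_tuned = brentq(energy_imbalance, 0.3, 1.0)
print(f"tuned effective emissivity: {eps_tuned:.3f}")

# The tuned model reproduces the target by construction; whether it then predicts
# the response to a perturbation is exactly what the survey above is worried about.
```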






Here is Steve Koonin, formerly Obama's Undersecretary for Science at DOE and a Caltech theoretical physicist, calling for a "Red Team" analysis of climate science, just a few months ago (un-gated link):
WSJ: ... The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed.

Note Added: In 2014 Koonin ran a one day workshop for the APS (American Physical Society), inviting six leading climate scientists to present their work and engage in an open discussion. The APS committee responsible for reviewing the organization's statement on climate change were the main audience for the discussion. The 570+ page transcript, which is quite informative, is here. See Physics Today coverage, and an annotated version of Koonin's WSJ summary.

Below are some key questions Koonin posed to the panelists in preparation for the workshop. After the workshop he declared: "The idea that 'Climate science is settled' runs through today's popular and policy discussions. Unfortunately, that claim is misguided."
The estimated equilibrium climate sensitivity to CO2 has remained between 1.5 and 4.5 degrees C in the IPCC reports since 1979, except for AR4, where it was given as 2-5.5 degrees C.

What gives rise to the large uncertainties (factor of three!) in this fundamental parameter of the climate system?

How is the IPCC’s expression of increasing confidence in the detection/attribution/projection of anthropogenic influences consistent with this persistent uncertainty?

Wouldn’t detection of an anthropogenic signal necessarily improve estimates of the response to anthropogenic perturbations?
I seriously doubt that the process by which the 1.5 to 4.5 range is computed is statistically defensible. From the transcript, it appears that IPCC results of this kind are largely the result of "Expert Opinion" rather than a specific computation! It is rather curious that the range has not changed in 30+ years, despite billions of dollars spent on this research. More here.
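One standard, textbook-style way to see how a factor-of-three spread can persist (my own illustration, not anything computed by the IPCC): write the equilibrium sensitivity as S = S0 / (1 - f), with S0 ~ 1 degree C the no-feedback response and f the net feedback factor. Even modest, roughly symmetric uncertainty in f produces a wide, right-skewed distribution for S.

```python
import numpy as np

rng = np.random.default_rng(0)

S0 = 1.1                              # no-feedback sensitivity, deg C per doubling
f = rng.normal(0.6, 0.12, 200_000)    # assumed (purely illustrative) feedback distribution
f = f[f < 0.95]                       # drop near-runaway samples

S = S0 / (1 - f)                      # equilibrium sensitivity with feedbacks
print("5th / 50th / 95th percentiles of S (deg C per doubling):",
      np.round(np.percentile(S, [5, 50, 95]), 1))
```

Because sensitivity depends on 1/(1 - f), shrinking the range requires pinning down the feedbacks themselves, which is where the model-tuning uncertainties discussed above enter.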

Saturday, June 03, 2017

Python Programming in one video



Putting this here in hopes I can get my kids to watch it at some point 8-)

Please recommend similar resources in the comments!
