How did Jamaica get to be so dangerous?

I’ve been listening to Goldeneye: Where Bond Was Born: Ian Fleming’s Jamaica, in which the 1950s version of the island is described as a paradise to which film stars and billionaires (adjusted to post-Biden $$) sought to escape. A fabulous oceanfront estate could be purchased or built for a few thousand British pounds. Fleming went there for two months every year, first to relax and then to write. Wikipedia:

When Jamaica gained independence in 1962, the murder rate was 3.9 per 100,000 inhabitants, one of the lowest in the world. In 2005, Jamaica had 1,674 murders, for a murder rate of 58 per 100,000 people, the highest murder rate in the world.

Today’s question is how this happened. Could it be overpopulation relative to a fixed set of resources? The following chart (source) shows what should have been manageable growth, from 1.6 million in 1960 to 2.75 million today:
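
For reference, the murder “rate” is simply homicides scaled to a population of 100,000. A quick sanity check of the Wikipedia figure, with approximate population estimates (the quoted 58 implies a 2005 population of nearly 2.9 million):

```python
# Murder rate = murders per 100,000 inhabitants.
def murder_rate(murders: int, population: int) -> float:
    return murders / population * 100_000

# Estimates of Jamaica's 2005 population cluster around 2.6-2.9 million.
print(murder_rate(1_674, 2_650_000))  # ~63.2 per 100,000
print(murder_rate(1_674, 2_890_000))  # ~57.9 per 100,000
```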

What about poverty? That’s often blamed for crime. The World Bank says the country has gotten richer, per capita (though note that the chart’s “current US$” units are nominal, i.e., not adjusted for inflation):

Maybe it is guns? We don’t have people killing people here in the U.S. We have “gun violence”. But the book describes guns as having been readily available in Jamaica in the 1950s (Fleming owned an assortment, for example).

Readers: Have you been to Jamaica? What’s it like for tourists? Are they mostly in resorts that are walled off from the locals?


Urban riots predictable after lockdown?

Loyal readers of this blog (i.e., both of you!) will recall that I have regularly asked whether the lockdown cure is worse than the coronavirus disease. I anticipated deaths in the U.S. due to the shutdown of health care for non-Covid issues, due to poverty and unemployment, due to the shutdown of clinical trials for new/improved medicines, and due to the shutdown of clinical training for medical doctors (post). I anticipated a vast number of deaths in poor countries that were our trading partners.

I did not anticipate civil unrest and the destruction of American cities, but in hindsight it seems obvious: locking the poorest Americans into their crummy tiny urban apartments for months, while taking away the jobs of most of those who formerly worked, would lead to their eventually emerging and entertaining themselves in ways that wouldn’t be entertaining for the mansion-dwelling governors who ordered the lockdowns (see “Your lockdown may vary”).

Police departments in the U.S. murder citizens on a regular basis (and why not, since officers are generally immune from being fired). The typical police murder does not bother too many Americans or even make the news. This one was unusually disturbing and unusually well documented on video, of course, but I still don’t think it would have been enough to trigger nationwide riots back in, say, 2019.

In addition to the lockdown itself having put non-mansion-dwelling Americans into a bad mood, I wonder if the lockdown created a general environment of lawlessness. Unlike in Sweden, for example, Americans were told that everything had changed due to the killer virus and therefore their Constitutional rights were inoperative. Since the old laws didn’t apply to the government, maybe the old laws against looting didn’t apply to the subjects?

Is it fair to say that a lot of Americans actually did anticipate this kind of breakdown of society? There was a huge run on guns and ammo back in March, right? I discovered that several of my friends had become new gun owners, including, for example, female physicians in their 40s living alone in cities. I scoffed at them, saying that the militarized U.S. police state would keep the ghetto-dwellers quietly imprisoned, watching TV while consuming alcohol and opioids purchased via Medicaid.

Readers: Were these riots easy to foresee?

Bonus: Some pictures from a recent helicopter trip over Dover, Massachusetts. #WeAreAllInThisTogether #StayHomeSaveLives

(The house is at 36 Farm Street. Trulia says that the annual property tax is $141,000, i.e., not enough to pay the pension of one retired senior police officer or school administrator. It may belong to Kevin Rollins, former CEO of Dell.)


Self-partnered versus Cat-partnered

A (female-identifying) reporter on Facebook:

Emma Watson says she doesn’t like the term “single” and prefers “self partnering.” This sounds empowering to me–how does it strike you? Let me know for a possible [newspaper] article?

(Under California family law, there are only a handful of people in the world whom the high-income, high-wealth Ms. Watson could marry and not expose herself to alimony and child support lawsuits. See “Burning Man: Attitudes toward marriage and children”:

We had a lot of high-income women in our camp. All recognized that they could be targeted and potentially become the loser under California’s winner-take-all system. A medical professional said “There is no way that I’m going to pay to support a guy. It was bad enough the last time that I lived with a boyfriend and I had to pick up his socks all the time and do his laundry. Thank God I didn’t have to support him financially.” A finance executive said “I worked my ass off for 17 years for what I have. I am not going to risk losing it.”

If Emma Watson gets sued by a husband in her native England, she could lose half of her accumulated fortune after one or two years of marriage (prenuptial agreements are not automatically binding in English courts).)

I’m not sure why at least some Americans who identify as women think that “self-partnered” is more “empowered” than simply “single,” but I wonder if a person with a lot of cats could be considered “cat-partnered”.


Low fertility among middle-class whites explained…

… by a white middle-class American. A Facebook post from a woman in her 30s who was last seen (by me) thoroughly enjoying Burning Man:

I decided recently to get clear on what my future self would like to create now for my life. Though I’ve enjoyed the wild ride of the single life and being foot loose and fancy free most of my adult life. I decided that slowing down to focus on building a relationship with my life partner was going to be my biggest priority. I definitely had to do some internal housekeeping to create space for this person to show up in my life and he sure did! I’m so happy to have found this man and our time together has been nothing short of awesome! He’s not on social media (which I find refreshing) so he will never see this, but I’m so thankful to have met him and I’m so excited for the life we plan to build together and I look forward to sharing this journey with you all. 💕

I.e., in her 20s she did not have the idea of an enduring partnership with a man and therefore, presumably, no children were planned. (Not sure she realized that having sex with an already-married dentist in Massachusetts would have paid a lot better than her job in education!)

If children are the outcome of this “life partnership,” the first won’t arrive until mom is at what biologists would say is the sunset of her fertility. We have the cold data on the low fertility of middle-income white Americans:

But via Facebook we have an explanation of how this occurs. (Remember that there are few Americans with incomes of $200,000+/year and therefore, despite what you might infer from a casual glance at the chart, the future population will be increasingly descended from people earning less than $50,000/year.)


In 20 years, will anyone roll the dice on a naturally conceived child?

A human parent’s biggest fear is having a child with a genetic disorder (though the most commonly expressed fear on Facebook is of Donald Trump winning a second term!).

Technology is bringing us pretty close to eliminating this fear and giving us a bewildering array of options.

She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity by Carl Zimmer reminds the reader of the diversity in sperm and egg cells due to meiosis:

In men, meiosis takes place within a labyrinth of tubes coiled within the testicles. The tube walls are lined with sperm precursor cells, each carrying two copies of each chromosome—one from the man’s mother, the other from his father. When these cells divide, they copy all their DNA, so that now they have four copies of each chromosome. Rather than drawing apart from each other, however, the chromosomes stay together. A maternal and paternal copy of each chromosome line up alongside each other. Proteins descend on them and slice the chromosomes, making cuts at precisely the same spots. As the cell repairs these self-inflicted wounds, a remarkable exchange can take place. A piece of DNA from one chromosome may get moved to the same position in the other, its own place taken by its counterpart. This molecular surgery cannot be rushed. All told, a cell may need three weeks to finish meiosis. Once it’s done, its chromosomes pull away from each other. The cell then divides twice, to make four new sperm cells. Each of the four cells inherits a single copy of all twenty-three chromosomes. But each sperm cell contains a different assembly of DNA. One source of this difference comes from how the pairs of chromosomes get separated. A sperm might contain the version of chromosome 1 that a man inherited from his father, chromosome 2 from his mother, and so on. Another sperm might have a different combination. At the same time, some chromosomes in a sperm are hybrids. Thanks to meiosis, a sperm cell’s copy of chromosome 1 might be a combination of DNA from both his mother and father.

A particular child of two parents, therefore, is just one choice from a near-infinite array of genetic possibilities assembled from the four grandparents. That’s what comes out when a baby is conceived naturally. What if parents were given the opportunity to choose from hundreds of possible outcomes?
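To put a rough number on “near-infinite”: from independent assortment alone, ignoring the crossing-over described above (which multiplies the count further), each parent can produce 2^23 distinct gametes. A back-of-the-envelope calculation:

```python
# Each gamete receives one of two copies of each of the 23 chromosomes,
# so independent assortment alone yields 2**23 combinations per parent.
per_parent = 2 ** 23
print(f"{per_parent:,}")       # 8,388,608 possible gametes per parent
print(f"{per_parent ** 2:,}")  # ~70 trillion possible children per couple
```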

In 2012, the Japanese biologist Katsuhiko Hayashi managed to coax induced pluripotent stem cells to develop into the progenitors of eggs. If he implanted them in the ovaries of female mice, they could finish maturing. Over the next few years, Hayashi perfected the procedure, transforming mouse skin cells into eggs entirely in a dish. When he fertilized the eggs, some of them developed into healthy mouse pups. Other researchers have figured out how to make sperm from skin cells taken from adult mice.

Nevertheless, the success that Yamanaka and other researchers have had with animals is grounds for optimism—or worry, depending on what you think about how we might make use of this technology. It’s entirely possible that, before long, scientists will learn how to swab the inside of people’s cheeks and transform their cells into sperm or eggs, ready for in vitro fertilization. If scientists can perfect this process—called in vitro gametogenesis—it will probably be snapped up by fertility doctors. Harvesting mature eggs from women remains a difficult, painful undertaking. It would be far easier for women to reprogram one of their skin cells into an egg. It would also mean that both women and men who can’t make any sex cells at all wouldn’t need a donor to have a child.

Today, parents who use in vitro fertilization can choose from about half a dozen embryos. In vitro gametogenesis might offer them a hundred or more. Shuffling combinations of genes together so many times could produce a much bigger range of possibilities.

But the implications of in vitro gametogenesis go far beyond these familiar scenarios—to ones that Hermann Muller never would have thought of. Induced pluripotent stem cells have depths of possibilities that scientists have just started to investigate. Men, for instance, might be able to produce eggs. A homosexual couple might someday be able to combine gametes, producing children who inherited DNA from both of them. One man might produce both eggs and sperm, combining them to produce a family—not a family of clones, but one in which each child draws a different combination of alleles. It would give the term single-parent family a whole new meaning.

Here’s an even more science-fiction-y possibility… The highest fertility among Americans is found in the lowest-income mothers, i.e., those who are on welfare. The government will be paying for 100 percent of the costs of any children produced by these mothers: housing, health care, food, education, etc. Once grown up, these children are likely to be low earners and therefore on welfare themselves (see The Son Also Rises). What if the government begins to run out of borrowing capacity and decides that it needs to fund future taxpayers, not future welfare recipients? The tendency to work and pay taxes is as heritable as anything else. So the government offers financial inducements to mothers who agree to abort children conceived with low-income men and instead incubate embryos provided by the government, said embryos to be carefully screened such that the moms are almost guaranteed to have a physically and mentally healthy child and the government is almost guaranteed to get an adult who enjoys working and paying taxes.

Readers: What do you think? In 2040 or 2050 will there be anyone willing to roll the genetic dice by having sex and seeing what kind of baby comes out?


Science is Settled: one characteristic cannot be inherited genetically

She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity by Carl Zimmer says that almost everything is heritable and that genetics is the mechanism for heritability. However, there is one big exception… intelligence.

Why does this matter? The book reminds us that the idea that a lack of intelligence will render a person dependent on welfare goes back at least to the 1930s:

The Great Depression was reaching its depths when [Henry Herbert] Goddard came back to Vineland, and he blamed it largely on America’s lack of intelligence: Most of the newly destitute didn’t have the foresight to save enough money. “Half of the world must take care of the other half,” Goddard said.

The idea that intelligence could not be explained by heredity is similarly old:

[British doctor Lionel] Penrose entered the profession as a passionate critic of eugenics, dismissing it as “pretentious and absurd.” In the early 1930s, eugenics still had a powerful hold on both doctors and the public at large—a situation Penrose blamed on lurid tales like The Kallikak Family. While those stories might be seductive, eugenicists made a mess of traits like intelligence. They were obsessed with splitting people into two categories—healthy and feebleminded—and then they would cast the feebleminded as a “class of vast and dangerous dimensions.” Penrose saw intelligence as a far more complex trait. He likened intelligence to height: In every population, most people were close to average height, but some people were taller and shorter than average. Just being short wasn’t equivalent to having some kind of a height disease. Likewise, people developed a range of different mental aptitudes. Height, Penrose observed, was the product of both inherited genes and upbringing. He believed the same was true for intelligence. Just as Mendelian variants could cause dwarfism, others might cause severe intellectual developmental disorders. But that was no reason to leap immediately to heredity as an explanation. “That mental deficiency may be to some extent due to criminal parents’ dwelling ‘habitually’ in slums seems to have been overlooked,” Penrose said. He condemned the fatalism of eugenicists, as they declared “there was nothing to be done but to blame heredity and advocate methods of extinction.”

Even if a country did sterilize every feebleminded citizen, Penrose warned, the next generation would have plenty of new cases from environmental causes. “The first consideration in the prevention of mental deficiency is to consider how environmental influences which are held responsible can be modified,” Penrose declared.

The author finds some cases in which children with severe physical disorders, e.g., PKU, have impaired intelligence. From this he reminds us that it is wrong to believe that “our intelligence is fixed by the genes we inherit.” (Is that truly a comforting idea? I would have been as smart as Albert Einstein, for example, but I watched too much TV as a kid and didn’t work hard enough as an adult?)

We would be as tall as the Dutch if only we were smart enough to build a bigger government (the 2nd-largest welfare state, as a percentage of GDP, is apparently not enough to grow tall!):

The economy of the United States, the biggest in the world, has not protected it from a height stagnation. Height experts have argued that the country’s economic inequality is partly to blame. Medical care is so expensive that millions go without insurance and many people don’t get proper medical care. Many American women go without prenatal care during pregnancy, while expectant mothers in the Netherlands get free house calls from nurses.

How do intelligence distributions change over time, given that environment is supposed to be a huge factor?

Intelligence is also a surprisingly durable trait. On June 1, 1932, the government of Scotland tested almost every eleven-year-old in the country—87,498 all told—with a seventy-one-question exam. The students decoded ciphers, made analogies, did arithmetic. The Scottish Council for Research in Education scored the tests and analyzed the results to get an objective picture of the intelligence of Scottish children. Scotland carried out only one more nationwide exam, in 1947. Over the next couple of decades, the council analyzed the data and published monographs before their work slipped away into oblivion.

Deary, Whalley, and their colleagues moved the 87,498 tests from ledgers onto computers. They then investigated what had become of the test takers. Their ranks included soldiers who died in World War II, along with a bus driver, a tomato grower, a bottle labeler, a manager of a tropical fish shop, a member of an Antarctic expedition, a cardiologist, a restaurant owner, and an assistant in a doll hospital. The researchers decided to track down all the surviving test takers in a single city, Aberdeen. They were slowed down by the misspelled names and erroneous birth dates. Many of the Aberdeen examinees had died by the late 1990s. Others had moved to other parts of the world. And still others were just unreachable. But on June 1, 1998, 101 elderly people assembled at the Aberdeen Music Hall, exactly sixty-six years after they had gathered there as eleven-year-olds to take the original test. Deary had just broken both his arms in a bicycling accident, but he would not miss the historic event. He rode a train 120 miles from Edinburgh to Aberdeen, up to his elbows in plaster, to witness them taking their second test. Back in Edinburgh, Deary and his colleagues scored the tests. Deary pushed a button on his computer to calculate the correlation between their scores as children and as senior citizens. The computer spat back a result of 73 percent. In other words, the people who had gotten relatively low scores in 1932 tended to get relatively low scores in 1998, while the high-scoring children tended to score high in old age.

If you had looked at the score of one of the eleven-year-olds in 1932, you’d have been able to make a pretty good prediction of their score almost seven decades later. Deary’s research prompted other scientists to look for other predictions they could make from childhood intelligence test scores. They do fairly well at predicting how long people stay in school, and how highly they will be rated at work. The US Air Force found that the variation in [general intelligence] among its pilots could predict virtually all the variation in tests of their work performance. While intelligence test scores don’t predict how likely people are to take up smoking, they do predict how likely they are to quit. In a study of one million people in Sweden, scientists found that people with lower intelligence test scores were more likely to get into accidents.
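
To get a feel for what a 0.73 correlation buys you as a predictor, here is a minimal simulation; everything in it except the 0.73 itself (Deary’s reported figure) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.73, 100_000

# Simulated childhood scores (mean 100, SD 15) plus noise sized so that
# the child/old-age correlation lands near the reported 0.73.
child = rng.normal(100, 15, n)
old_age = 100 + r * (child - 100) + rng.normal(0, 15 * (1 - r**2) ** 0.5, n)

print(np.corrcoef(child, old_age)[0, 1])  # ~0.73

# The best linear prediction regresses toward the mean: a child who
# scored 130 is predicted to score about 100 + 0.73 * 30 = 122 at age 77.
```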

IQ is correlated with longevity:

But Deary’s research raises the possibility that the roots of intelligence dig even deeper. When he and his colleagues started examining Scottish test takers in the late 1990s, many had already died. Studying the records of 2,230 of the students, they found that the ones who had died by 1997 had on average a lower test score than the ones who were still alive. About 70 percent of the women who scored in the top quarter were still alive, while only 45 percent of the women in the bottom quarter were. Men had a similar split. Children who scored higher, in other words, tended to live longer. Each extra fifteen IQ points, researchers have since found, translates into a 24 percent drop in the risk of death.
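
A minimal reading of that last sentence, assuming the 24 percent drop compounds multiplicatively across 15-point increments (the underlying study may have modeled this differently):

```python
# Relative risk of death for someone delta_iq points above a reference,
# assuming a 24 percent drop per 15 IQ points that compounds.
def relative_risk(delta_iq: float, drop_per_15: float = 0.24) -> float:
    return (1.0 - drop_per_15) ** (delta_iq / 15.0)

print(relative_risk(15))  # 0.76 -> the reported 24 percent drop
print(relative_risk(30))  # ~0.58 -> roughly a 42 percent drop
```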

The author reports that twins separated at birth have almost identical IQs, despite completely different childhood environments. With most other personal characteristics, this would lead to the conclusion that intelligence was mostly heritable. Instead, however, Zimmer points out that if heritability is not 100 percent then it would be a mistake to call something “genetic”:

Intelligence is far from blood types. While test scores are unquestionably heritable, their heritability is not 100 percent. It sits instead somewhere near the middle of the range of possibilities. While identical twins often end up with similar test scores, sometimes they don’t. If you get average scores on intelligence tests, it’s entirely possible your children may turn out to be geniuses. And if you’re a genius, you should be smart enough to recognize your children may not follow suit. Intelligence is not a thing to will to your descendants like a crown.

To bolster the claim that intelligence is not heritable, the book cites examples of children whose mothers were exposed to toxic chemicals during pregnancy. It also cites examples in which staying in school for additional years raises IQ (a measure of symbol-processing efficiency).

Here’s an interesting-sounding study:

In 2003, Eric Turkheimer of the University of Virginia and his colleagues gave a twist to the standard studies on twins. To calculate the heritability of intelligence, they decided not to just look at the typical middle-class families who were the subject of earlier studies. They looked for twins from poorer families, too. Turkheimer and his colleagues found that the socioeconomic class determined how heritable intelligence was. Among children who grew up in affluent families, the heritability was about 60 percent. But twins from poorer families showed no greater correlation than other siblings. Their heritability was close to zero.

(Do we believe that heritability is zero because identical twins and siblings both have highly correlated IQs? Earlier in the book, the author describes how hospitals and doctors often misclassify twins.)
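
For reference, the standard twin-study arithmetic is Falconer’s formula, h² = 2(r_MZ − r_DZ): heritability is estimated from how much more alike identical twins are than fraternal twins (or ordinary siblings). A sketch with made-up correlations that reproduce Turkheimer’s headline numbers:

```python
# Falconer's formula: heritability from the gap between identical-twin
# (r_mz) and fraternal-twin/sibling (r_dz) correlations.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    return 2.0 * (r_mz - r_dz)

# Hypothetical affluent-family correlations: identical twins much more
# alike than fraternal twins -> high heritability.
print(falconer_h2(r_mz=0.80, r_dz=0.50))  # ~0.60, like the affluent group

# Hypothetical poor-family correlations: identical twins barely more
# alike than other siblings -> heritability near zero.
print(falconer_h2(r_mz=0.60, r_dz=0.58))  # ~0.04
```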

Buried in the next section is the admission that this finding is not straightforward to replicate (“Why Most Published Research Findings Are False”):

If you raise corn in uniformly healthy soil, with the same level of abundant sunlight and water, the variation in their height will largely be the product of the variation in their genes. But if you plant them in a bad soil, where they may or may not get enough of some vital nutrient, the environment will be responsible for more of their differences. Turkheimer’s study hints that something similar happens to intelligence. By focusing their research on affluent families—or on countries such as Norway, where people get universal health care—intelligence researchers may end up giving too much credit to heredity. Poverty may be powerful enough to swamp the influence of variants in our DNA. In the years since Turkheimer’s study, some researchers have gotten the same results, although others have not. It’s possible that the effect is milder than once thought. A 2016 study pointed to another possibility, however. It showed that poverty reduced the heritability of intelligence in the United States, but not in Europe. Perhaps Europe just doesn’t impoverish the soil of its children enough to see the effect. Yet there’s another paradox in the relationship between genes and the environment. Over time, genes can mold the environment in which our intelligence develops. In 2010, Robert Plomin led a study on eleven thousand twins from four countries, measuring their heritability at different ages. At age nine, the heritability of intelligence was 42 percent. By age twelve, it had risen to 54 percent. And at age seventeen, it was at 68 percent. In other words, the genetic variants we inherit assert themselves more strongly as we get older. Plomin has argued that this shift happens as people with different variants end up in different environments. A child who has trouble reading due to inherited variants may shy away from books, and not get the benefits that come from reading them.

Poverty in the cruel U.S. crushes children! But, measured in terms of consumption, an American family on welfare actually lives better than a lot of middle-class European families. The author praises the Europeans with their universal health care systems, but 100 percent of poor American children qualify for Medicaid, a system of unlimited health care spending (currently covering roughly 75 million people).

If unlimited taxpayer-funded Medicaid isn’t sufficient to help poor American children reach their genetic potential, maybe early education will? The book quotes a Head Start planner: “The fundamental theoretical basis of Head Start was the concept …”


Government should sponsor video games about pandas?

Like previous generations of politicians, Donald Trump is critical of pop culture, specifically violent video games, which he says are partly to blame for recent mass shootings (e.g., in Dayton).

(Typical historical example: Tipper Gore was upset about pop song lyrics while then-husband Al Gore was fighting climate change with “an environmentalist named to a prominent cabinet position by Gore when he was vice president, a sexy Hollywood actress, a gorgeous massage therapist and a Tennessee Titans cheerleader.”)

Maybe the Second Amendment can be reinterpreted into irrelevance, but we still have the First Amendment, which allows video game companies to publish shooting games.

If the Federales want to promote non-violent games, should there be a fund to compensate developers of games about pandas and other anodyne subjects? If the games are free, developers/publishers could compete for funds by demonstrating how many hours per day people play them.

Presumably the typical mass-shooting perpetrator suffers from social isolation, so funding could be increased for games that require participants to cooperate.
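
A sketch of what such an allocation rule might look like; the game titles, player-hour counts, and the 1.5X cooperation bonus are all invented:

```python
# Split a fixed fund in proportion to verified player-hours, with a
# hypothetical 1.5X weight for games that require cooperative play.
def allocate(fund: float, games: dict[str, tuple[float, bool]]) -> dict[str, float]:
    # games maps title -> (daily player-hours, requires cooperation?)
    weights = {title: hours * (1.5 if coop else 1.0)
               for title, (hours, coop) in games.items()}
    total = sum(weights.values())
    return {title: fund * w / total for title, w in weights.items()}

print(allocate(10_000_000, {
    "Panda Park": (1_200_000, False),
    "Bamboo Builders": (800_000, True),
}))
```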

(I personally would rather see the U.S. rearchitected into Latin American-style towns with public squares than left as our current, inherently isolating suburban sprawl; see my non-profit ideas page for what I wish our Africa-focused billionaires would spend their money on:

Latin Americans often come up near the very top of the world’s happiest people, despite a material prosperity that is pale compared to what we enjoy in the United States. Nearly every small town in Latin America is built around a central plaza where the citizens gather at various hours to meet friends, play chess, eat meals in restaurants, etc. Small streets radiate from the plaza and hold all of the shops that are essential to daily life, including supermarkets and hardware stores. Housing is built up to a three-story height, dense enough to support businesses, but not so dense that people are isolated in concrete towers with elevators. Smaller workshops are mixed in with housing, introducing young people to the texture of business.

The U.S. offers some enjoyable walkable neighborhoods, mostly developed before the rise of the automobile. Examples include many neighborhoods within New York City, San Francisco, Chicago, and Boston. These neighborhoods, however, are small and can hold only a tiny minority of Americans. Consequently, houses within walkable neighborhoods typically cost over $1 million. As the U.S. population heads toward 500 million, these livable neighborhoods will become even more out of reach of the average citizen.

It might also help if we didn’t offer the world’s most lucrative incentives for becoming a “single parent,” thereby leading to the world’s highest rate of children reared without two parents.)


Keep obscure languages (such as Irish) alive via free videogames?

My Irish host’s son is just finishing what we would call high school. At great cost to the Irish taxpayer and himself, he is now fluent in Irish. I asked whether this had any practical value. “Not really,” he replied. “There are only about 80,000 speakers of Irish.” Had he ever used Irish outside of the classroom or an organized immersion program? “No.” Would he be able to use Irish to shop at the local supermarket or any other nearby merchant? “Not a chance.”

Did the Irish language have any communication value? I.e., among those 80,000 speakers were there any who did not speak English? “Maybe somewhere on the Aran Islands you could find one person.”

How can he possibly maintain his fluency under these circumstances?

One idea: Dedicate 1 percent of the current Irish instruction budget to developing video games and apps that require reading, writing, listening, and speaking Irish. Give them away free. Refuse to make a version in any other language, no matter how popular a game becomes. If successful, maybe young people in China will learn Irish so as to be able to enjoy the games.

Readers: Could this work? Would it be more cost-effective than other methods of keeping a mostly-dead language alive?


Americans with elite educations advocate for socialism because they are shocked at not being rich?

“If you’re so smart, why aren’t you rich?” was a common expression in New York City during my father’s youth (Great Depression and World War II).

I’m wondering if this way of thinking explains why so many Americans who’ve obtained degrees from elite institutions and earn above-median wages are advocates of socialism. On the face of it, it doesn’t seem rational for people who earn 4-5X the median wage to say that income inequality is a national emergency and to be more enthusiastic about socialism than are people who earn below-median wages.

Pre-2016, my neighbors here in Eastern Massachusetts were upset when politicians and bureaucrats in Washington, D.C. would make decisions without consulting them. Since they knew themselves to be the smartest folks on the planet, why wouldn’t President Obama, the Wise One, call them up to ask for advice? Upset turned to rage following the country’s choice of Donald Trump.

What’s even more upsetting than not having one’s desired level of political influence? Not having one’s fair level of financial reward.

In a fair market, someone with a Ph.D. in humanities would get paid more than someone with a high school degree, at least if the Ph.D. holder is allowed to define “fair.” Yet an American bond trader with a high school degree can easily earn 10X what a liberal arts professor earns (100X if we compare to an adjunct!). Thus we come to a slightly newer adage: “When the market gives you an answer you don’t like, declare market failure.”

Readers: What do you think? What accounts for people with incomes that are well above the median advocating for “socialism”, which would tend to narrow the income distribution? Could it be rational? As the U.S. population expands and there is a brutal competition for scraps of desirable real estate, for example, will it help the Ph.D. academic to afford a beach house if central planners won’t give the bond trader enough to buy 10 beach houses?
