The latest (somewhat random) collection of recent essays and stories from around the web that have caught my eye and are worth plucking out to be re-read.
Am I a bad feminist?
Margaret Atwood, Globe & Mail, 13 January 2018
It seems that I am a “Bad Feminist.” I can add that to the other things I’ve been accused of since 1972, such as climbing to fame up a pyramid of decapitated men’s heads (a leftie journal), of being a dominatrix bent on the subjugation of men (a rightie one, complete with an illustration of me in leather boots and a whip) and of being an awful person who can annihilate – with her magic White Witch powers – anyone critical of her at Toronto dinner tables. I’m so scary! And now, it seems, I am conducting a War on Women, like the misogynistic, rape-enabling Bad Feminist that I am.
What would a Good Feminist look like, in the eyes of my accusers?
My fundamental position is that women are human beings, with the full range of saintly and demonic behaviours this entails, including criminal ones. They’re not angels, incapable of wrongdoing. If they were, we wouldn’t need a legal system.
Nor do I believe that women are children, incapable of agency or of making moral decisions. If they were, we’re back to the 19th century, and women should not own property, have credit cards, have access to higher education, control their own reproduction or vote. There are powerful groups in North America pushing this agenda, but they are not usually considered feminists.
Furthermore, I believe that in order to have civil and human rights for women there have to be civil and human rights, period, including the right to fundamental justice, just as for women to have the vote, there has to be a vote. Do Good Feminists believe that only women should have such rights? Surely not. That would be to flip the coin on the old state of affairs in which only men had such rights.
So let us suppose that my Good Feminist accusers, and the Bad Feminist that is me, agree on the above points. Where do we diverge? And how did I get into such hot water with the Good Feminists?
Read the full article in the Globe & Mail.
The moral economy of the Iranian protests
Kaveh Ehsani & Arang Keshavarzian, Jacobin, 11 January 2018
The recent demonstrations in Iran have been noteworthy for their geographic scope and range of grievances. Triggered by discontent over persistent unemployment and inflation, long overdue wages and pensions, the reduction of cash subsidies, environmental degradation, and the collapse of murky financial institutions that turned out to be Ponzi schemes, the protests have been taking place primarily in provincial towns. At their core, these protests are a moral outcry of the marginalized periphery against what it perceives to be a callous center and its betrayal of the social justice vision that animated and united the revolutionary forces of 1979.
Local dissent has been a regular and widespread feature of Iranian politics, especially since the end of the Iran-Iraq War in 1988. Yet these protests receive little attention in the Western media, which focuses instead on the views of the Supreme Leader, factional rivalries among elites, and the nuclear program. Even a cursory glance at the Iranian press in the past three decades reveals a constant current of protests by teachers, nurses, bus drivers, industrial and agrarian workers, conscripts, students, pensioners, and others over broken promises and work conditions. They persist despite being dealt with harshly by the authorities.
These citizens are not all poor, propertyless, or uneducated, but they have been suffering from high youth unemployment, a housing market disfigured by speculation, relaxed labor regulations, and the general inability to live the life promised by their educational status or even to match their parents’ standard of living. Because the Islamic Republic is adept at repressing formal avenues of grassroots representation like functioning political parties, independent associations, and trade unions, these grievances previously remained isolated and contained. Now they have exploded.
Read the full article in Jacobin.
As a 2-state solution loses steam, a 1-state plan gains traction
David M Halbfinger, New York Times, 5 January 2018
Saeb Erekat, the veteran Palestinian negotiator, said that Mr. Trump’s declaration was the death knell for the two-state solution and that Palestinians should shift their focus to ‘one state with equal rights.’ His position has since gained traction among the Palestinian leadership.
Under that idea, the Palestinian movement would shift to a struggle for equal civil rights, including the freedoms of movement, assembly and speech, and the right to vote in national elections. ‘Which could mean a Palestinian could be the prime minister,’ Mr. Barghouti said.
To its Palestinian supporters, the one-state idea is bitter consolation after decades of striving for statehood under the Oslo peace accords, which many believe has achieved little aside from providing cover, and buying time, for Israel to expand settlements.
‘When you support the two-state solution, you’re supporting Netanyahu,’ said As’ad Ghanem, a political science lecturer at the University of Haifa who has been working with a group of Israelis and Palestinians on a one-state strategy for some time. ‘It is time for us Palestinians to present an alternative.’
Several efforts are underway. A decade-old group called the Popular Movement for One Democratic State, led by Radi Jarai, a former Fatah leader who served 12 years in prison in Israel after helping to lead the 1987 intifada, is planning a media campaign to explain the idea to West Bank residents.
‘They think it means Palestinians will take the Israeli ID and live under an apartheid regime,’ he said. ‘But our idea is to have one democratic state, with no privilege for the Jews or for any other ethnic or religious group.’
Others are talking about drafting a prototype constitution for a single state or forming a political party in Israel and on the West Bank to push for it. ‘At least 30 percent of Palestinians support one-state when no one is talking about it,’ said Hamada Jaber, an organizer of a group called the One State Foundation. ‘If there’s at least one political party on each side that will talk about it and adapt this strategy, the support will grow.’
The idea has stronger support among the young, said Khalil Shikaki, a Palestinian pollster, particularly students and professionals who have clamored for a change in strategy since the Arab spring in 2011.
Read the full article in the New York Times.
The man who gave white nationalism a new life
J Lester Feder & Pierre Buet, BuzzFeed News, 27 December 2017
His core arguments are at the heart of many nationalist movements around the world, echoed even by those who do not know his name. His work helped give an aura of respectability to the notion that European ‘identity’ needs to be defended against erasure by immigration, global trade, multinational institutions, and left-wing multiculturalism.
Today, de Benoist generally avoids social media and remains very much a man of the printed page. His Paris apartment is a refuge from the country home where he keeps a personal library of more than 200,000 volumes, a collection so vast he says it has become a burden. His study houses an art collection that includes a modernist portrait of de Benoist with his face encased in what appears to be a mask of metal. A poster for a talk he once gave in Turkey hangs on the bathroom wall, opposite a poster featuring different breeds of cats.
He now sees himself as more left than right and says he would have voted for Bernie Sanders in the 2016 US election. (His first choice in the French election was the leftist candidate Jean-Luc Mélenchon.) He rejects any link between his New Right and the alt-right that supported Donald Trump. ‘Maybe people consider me their spiritual father, but I don’t consider them my spiritual sons,’ he said.
De Benoist’s views have changed a lot over his career, and he has written so extensively and in such dense prose that it can be hard to figure out what he believes today. (For English speakers, his challenge is complicated further by how little of his work has been translated.) He’s denounced racism but opposes integration. He rejects demands that immigrants assimilate or ‘remigrate’ but laments ‘sometimes-brutal’ changes they bring to European communities. He says identities change over time but wants them to be ‘strong.’ He disavows the alt-right but collaborates with some of the most prominent people associated with the movement.
Over the course of an afternoon, he grew frustrated with questions about how his ideas link to today’s politics, saying, ‘You treat the New Right as a political subject, but for us it is an intellectual subject.’
It wasn’t the far right that brought de Benoist’s writings to the United States. A left-wing journal called Telos, which was drawn to de Benoist’s critique of US foreign policy, first published his work in the 1990s. In 1999, Telos translated his Manifesto for a European Renaissance, in which he laid out a philosophy that has become known as ‘ethnopluralism’ — arguing that all ethnic groups have a common interest in defending their ‘right to difference’ and opposing all forces that threaten to erase boundaries between ‘strong identities.’
Whatever his intentions, this argument caught the eye of a new generation of white nationalists, in whose hands ethnopluralism became a kind of upside-down multiculturalism. They were not white supremacists, they claimed, but they believed that everyone was better off in a world where ethnicities were separate but — at least theoretically — equal.
Read the full article in BuzzFeed News.
What to look out for in science in 2018
Philip Ball, Homunculus, 5 January 2018
This will be the year when we see a quantum computer solve some computational problem beyond the means of the conventional ‘classical’ computers we currently use. Quantum computers use the rules of quantum mechanics to manipulate binary data – streams of 1s and 0s – and this potentially makes them much more powerful than classical devices. At the start of 2017 the best quantum computers had only around 5 quantum bits (qubits), compared to the billions of transistor-based bits in a laptop. By the close of the year, companies like IBM and Google said they were testing devices with ten times that number of qubits. It still doesn’t sound like much, but many researchers think that just 50 qubits could be enough to achieve ‘quantum supremacy’ – the solution of a task that would take a classical computer so long as to be practically impossible. This doesn’t mean that quantum computers are about to take over the computer industry. For one thing, they can so far only carry out certain types of calculation, and dealing with random errors in the calculations is still extremely challenging. But 2018 will be the year that quantum computing changes from a specialized game for scientists to a genuine commercial proposition.
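A quick back-of-the-envelope sketch (my own illustration, not from the article) of why roughly 50 qubits is a plausible ‘supremacy’ threshold: simulating an n-qubit register on a classical machine means storing 2^n complex amplitudes, so the memory cost doubles with every extra qubit.

```python
# Illustrative calculation: a full classical simulation of n qubits
# stores 2**n complex amplitudes (complex128 = 16 bytes each).

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full n-qubit state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (5, 30, 50):
    print(f"{n:2d} qubits -> {statevector_bytes(n):,} bytes")
# 5 qubits fit in half a kilobyte; 30 need ~16 GiB (a well-equipped laptop);
# 50 need ~16 PiB, far beyond any existing classical machine.
```

Real simulators stretch a little further with tricks such as tensor-network contraction, but the exponential wall is the point: each added qubit doubles the classical cost, while the quantum device just adds one more physical component.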
Using quantum rules for processing information has more advantages than just a huge speed-up. These rules make possible some tricks that just aren’t imaginable using classical physics. Information encoded in qubits can be encrypted and transmitted from a sender to a receiver in a form that can’t be intercepted and read without that eavesdropping being detectable by the receiver, a method called quantum cryptography. And the information encoded in one particle can in effect be switched to another identical particle in a process dubbed quantum teleportation. In 2017 Chinese researchers demonstrated quantum teleportation in a light signal sent between a ground-based source and a space satellite. China has more ‘quantum-capable’ satellites planned, as well as a network of ground-based fibre-optic cables, which will ultimately comprise an international ‘quantum internet’. This network could support cloud-based quantum computing, quantum cryptography and surely other functions not even thought of yet. Many experts put that at a decade or so off, but we can expect more trials – and inventions – of quantum network technologies this year.
The announcement in December of a potential new treatment for Huntington’s disease, an inheritable neurodegenerative disease for which there is no known cure, has implications that go beyond this particularly nasty affliction. Like many dementia-associated neurodegenerative diseases such as Parkinson’s and Alzheimer’s, Huntington’s is caused by a protein molecule involved in regular brain function that can ‘misfold’ into a form that is toxic to brain cells. In Huntington’s, which currently affects around 8,500 people in the UK, the faulty protein is produced by a mutation of a single gene. The new treatment, developed by researchers at University College London, uses a short strand of DNA that, when injected into the spinal cord, attaches to an intermediary molecule involved in translating the mutated gene to the protein and stops that process from happening. The strategy was regarded by some researchers as unlikely to succeed. The fact that the current preliminary tests proved dramatically effective at lowering the levels of toxic protein in the brain suggests that the method might be a good option not just for arresting Huntington’s but other similar conditions, and we can expect to see many labs trying it out. The real potential of this new drug will become clearer when the Swiss pharmaceuticals company Roche begins large-scale clinical trials.
Read the full article in Homunculus.
Why are so many Americans crowdfunding their healthcare?
Barney Jopson, FT Magazine, 10 January 2018
Online crowdfunding began in the late 1990s as a way for musicians and film-makers to raise money from fans. Within a decade it had begun to take hold for personal causes. The founders of GoFundMe, the sector leader, initially envisaged helping users pay for holidays and weddings. But by 2009 they began to notice people were using the site more to cope with crises, including medical emergencies. They embraced the trend. The company now calls itself ‘The World’s No.1 Site for Medical, Illness & Healing fundraising’ and says it has raised $5bn for users globally since 2010.
It should be no surprise that healthcare proved fertile territory. Crowdfunding is also growing in the UK, Australia and Canada but is generally confined to treatments not covered by public healthcare systems. In the US it is often about paying for the basics. American medicine is big business and the US spends more on it than any other nation, yet it is the only developed country that lacks universal healthcare coverage. A fifth of US household spending went on healthcare in 2013, compared with just 4 per cent in the EU, according to Eurostat, a statistics agency.
Despite this, Americans are in worse health. Judged on measures including life expectancy and infant mortality, the US ranked last for healthcare outcomes among 11 high-income countries in a study last year by the Commonwealth Fund, a New York-based research foundation.
The medical-industrial complex, shaped by decades of policy grounded in a belief that private is better than public, is as vast as it is incoherent. For many patients, it is synonymous with dysfunction and unmanageable expense. According to a Gallup poll last December, 72 per cent of Americans believe the US healthcare system ‘has major problems’ or is ‘in a state of crisis’. The strains are all too apparent. From 2005 to 2013, medical bills were the single largest cause of consumer bankruptcy in the US, according to Daniel Austin, a Northeastern University law professor.
‘Who among us hasn’t opened a medical bill or an explanation of benefits statement and stared in disbelief at terrifying numbers?’ asks Elisabeth Rosenthal, a physician-turned-author, in her 2017 book, An American Sickness. ‘It is easy to feel helpless.’ Sites like GoFundMe and YouCaring offer the chance to regain some control. Their pages reveal campaigns for people facing strokes, leukaemia, Lyme disease, kidney transplants and muscular dystrophy. There are limbs lost, bones shattered and organs ruptured in car crashes and mountain falls and shootings.
Read the full article in the FT Magazine.
Brexit: why the UK’s economic performance hasn’t led to buyer’s remorse
Larry Elliott, Guardian, 7 January 2018
No question there were those in the remain camp who, despite the obvious flaws in the European project, genuinely thought nothing good could ever come of Brexit and it would be the poor and the vulnerable who had voted leave who would suffer most from what they saw as its inevitable baleful consequences. There was, though, a snobbish and nasty subtext to the buyer’s remorse theory, which was that the plebs were too dumb to know what they were voting for.
Yet it was always a long shot that a second referendum would come about by these means and so it has proved. Eighteen months on and there has been little sign of buyer’s remorse.
In part, that is because people voted remain or leave in the referendum for complex reasons. The referendum was never just about economics, and in retrospect it was a strategic blunder on the part of the remain camp to fight only on what the vote would mean for GDP per head and house prices.
Another reason why buyer’s remorse has not set in is that the country – or rather the part of the country (by far the bigger part) that is not obsessed with Brexit – has moved on. There are Brexit fanatics, there are remain fanatics, and in between there are millions of people who were asked for a decision in June 2016, made it and now expect democracy to take its course. They have switched off from Brexit in just the same way that they switch off from politics between general elections…
All of which brings us to the final problem with the buyer’s remorse theory: its proponents have spent so much time banging on about how terrible Brexit will be that they have neglected to come up with any solutions for tackling the reasons people voted for Brexit in the first place: low wages, job insecurity, the feeling that they were not being listened to. Remainers have latched on to any piece of negative economic news – no matter how trivial – in the hope that this will lead to change of heart among leave voters. But they have struggled to sketch out a plan for dealing with Britain’s structural economic problems, which were there before 23 June 2016 and will still be there whether or not the referendum result is overturned.
Read the full article in the Guardian.
Fighting fake news is not the solution
Masha Gessen, New Yorker, 4 January 2018
In the popular imagination, the public is divided into two segments of roughly equal size: the ‘liberal bubble’ and the ‘right-wing bubble.’ In fact, there has never been much evidence that this picture was true, and two recent data points contribute to disproving it. One is a large study of the reach and impact of fake news; the other is opinion-poll data on the tax-reform bill that Congress passed and President Trump signed into law in December. Together, they burst the two-bubble theory by showing that most Americans are better informed and less gullible than you might think. That, in turn, suggests that fighting ‘fake news’ is not the solution, or perhaps even a solution, to our current political problems.
For the fake-news study, three political scientists from three different universities—Andrew Guess, from Princeton, Brendan Nyhan, from Dartmouth, and Jason Reifler, from the University of Exeter—combined data on Web traffic in the month before and one week after the election with responses to an online public-opinion survey by 2,525 Americans to determine who consumed fake news, and how much. For their definition of ‘fake news,’ the authors relied on an earlier study by the economists Hunt Allcott and Matthew Gentzkow, who looked at stories that are ‘intentionally and verifiably false and could mislead readers.’ The economists’ study suggested that every American adult had been exposed to at least one fake news story in the lead-up to the 2016 election, but relatively few people—roughly eight per cent—actually believed them.
The economists’ study was based on what people recalled seeing; the new study by the political scientists uses more and harder data. The conclusions, however, are fairly similar. First, they found that Trump supporters read fake pro-Trump stories while Clinton supporters read fake pro-Clinton stories, but the latter group consumed a lot less fake news than did the former. The study did not directly address the question of how far in advance of reading the fake stories these voting preferences had been cemented; the authors concluded that ‘the ‘echo chamber’ is deep . . . but narrow.’ The biggest news, in other words, is that while many people were exposed to fake news stories, few were taken in by them as measured by how many similar articles they went on to read. About ten per cent of news consumers sought out more fake news, and they read an average of 33.16 fake stories, according to the political scientists.
An earlier study that used a different approach to data collection—combining social-media sharing, hyperlinking patterns, and language use—yielded similar results. In an article published in the Columbia Journalism Review last March, scholars and researchers from Harvard, Ritsumeikan, and the Massachusetts Institute of Technology argued that, in media consumption, ‘polarization was asymmetric.’ In other words, rather than two bubbles, there was one, positioned far to the right of the political spectrum. A majority of Americans, the study showed, get their news from a variety of different media. They are routinely exposed to opinions they don’t share; they do not live in an echo chamber.
Read the full article in the New Yorker.
Radical dimensions
Margaret Wertheim, Aeon, 10 January 2018
Long before physicists embraced the Euclidean vision, painters had been pioneering a geometrical conception of space, and it is to them that we owe this remarkable leap in our conceptual framework. During the late Middle Ages, under a newly emerging influence deriving from Plato and Pythagoras, Aristotle’s prime intellectual rivals, a view began to percolate in Europe that God had created the world according to the laws of Euclidean geometry. Hence, if artists wished to portray it truly, they should emulate the Creator in their representational strategies. From the 14th to the 16th centuries, artists such as Giotto, Paolo Uccello and Piero della Francesca developed the techniques of what came to be known as perspective – a style originally termed ‘geometric figuring’. By consciously exploring geometric principles, these painters gradually learned how to construct images of objects in three-dimensional space. In the process, they reprogrammed European minds to see space in a Euclidean fashion.
The historian Samuel Edgerton recounts this remarkable segue into modern science in The Heritage of Giotto’s Geometry (1991), noting how the overthrow of Aristotelian thinking about space was achieved in part as a long, slow byproduct of people standing in front of perspectival paintings and feeling, viscerally, as if they were ‘looking through’ to three-dimensional worlds on the other side of the wall. What is so extraordinary here is that, while philosophers and proto-scientists were cautiously challenging Aristotelian precepts about space, artists cut a radical swathe through this intellectual territory by appealing to the senses. In a very literal fashion, perspectival representation was a form of virtual reality that, like today’s VR games, aimed to give viewers the illusion that they had been transported into geometrically coherent and psychologically convincing other worlds.
The illusionary Euclidean space of perspectival representation that gradually imprinted itself on European consciousness was embraced by Descartes and Galileo as the space of the real world. Worth adding here is that Galileo himself was trained in perspective. His ability to represent depth was a critical feature in his groundbreaking drawings of the Moon, which depicted mountains and valleys and implied that the Moon was as solidly material as the Earth.
By adopting the space of perspectival imagery, Galileo could show how objects such as cannonballs moved according to mathematical laws. The space itself was an abstraction – a featureless, inert, untouchable, un-sensable void, whose only knowable property was its Euclidean form. By the end of the 17th century, Isaac Newton had expanded this Galilean vision to encompass the universe at large, which now became a potentially infinite three-dimensional vacuum – a vast, quality-less, emptiness extending forever in all directions. The structure of the ‘real’ had thus been transformed from a philosophical and theological question into a geometrical proposition.
Read the full article in Aeon.
The myth of ‘populism’
Anton Jager, Jacobin, 3 January 2018
In public consciousness, however — and certainly in European debates — the Hofstadter thesis is alive and well. It’s hard to find a contemporary pundit who doesn’t see populism and proto-fascism as implicit synonyms and who doesn’t cast the late-nineteenth-century Populists as first-class bigots.
And despite the discrediting of his thesis, Hofstadter’s influence may well account for this legacy. Political science, for example, never had its own Hofstadter debate, though the field was heavily influenced by it. In the 1960s, modernization theorists like Seymour Martin Lipset, Daniel Bell, and Edward Shils (often colleagues of Hofstadter) integrated the term into their own models of a new social science. There, the word populism, used as a synonym for ‘illiberal democracy,’ fared rather well. Political scientists globalized the concept to fit patterns of modernization in regions such as Latin America, East Asia, and Africa. In these instances, they described democratic movements as populist if they didn’t conform to the rules of the liberal game — populism as ‘democracy without the rule of law.’
Thanks to modernization theory’s influence in the 1960s and 1970s, this vision also had a strong effect on European debates. By the end of the 1960s, populism was on every political scientist’s lips.
Its heritage, however, remained problematic — as became even clearer in 1968. Around that time, two English historians decided to organize a conference at the London School of Economics. A variety of researchers attended: Isaiah Berlin, Ernest Gellner, Ghita Ionescu, and — most interestingly — Hofstadter himself.
At the end of the conference, Hofstadter joined the conversation. He started his speech by admitting that he was slightly dazzled by the wide range of movements classified as ‘populism’; he had expected nothing but a discussion of Russian and American variants. He also conceded defeat in the revisionist controversy — the ‘genetic affiliation’ between McCarthyism and ‘earlier agrarian movements’ was ‘doubtless miscarried,’ he said. Even if the John Birchers and other ‘paranoid-style’ politicos did ‘twang some populist strings,’ they no longer qualified as ‘substantial’ populists.
This confession, however, did not stem the concept’s rise in European academia. While Hofstadter admitted his mistakes, European political scientists became even more enthusiastic about his version of small-p populism. In the 1980s, Hofstadter’s thesis gained further traction in European political science departments, most interestingly in France.
Read the full article in Jacobin.
Collision with reality: What depth psychology can tell us about victimhood culture
Lisa Marchiano, Quillette, 27 December 2017
An October 2017 New York Times article entitled ‘Why Are More American Teenagers Than Ever Suffering From Extreme Anxiety?’ looked at the rising tide of teen anxiety in the United States. Increasing academic pressures, the advent of smartphones, and ubiquitous social media use were explored as potential contributors to increasing teen anxiety, but the article implicated another factor as well – school cultures that enable young people to avoid those things that make them uncomfortable. Special educational 504 plans address student anxieties by allowing kids to leave class early, use special entrances, and seek out safe spaces when they are feeling overwhelmed. A therapist interviewed for the Times article worries that these kinds of ‘avoidance-based’ accommodations only make anxiety worse by sending the message to kids that they are too fragile to handle things that make them uncomfortable.
Essentially, such adaptations to anxiety cultivate an external locus of control, teaching young people that they are not capable of handling challenge, and encouraging them to believe that the world around them ought to be altered to meet their needs. This primes people to expect life to conform to their expectations, and to feel crushed or outraged when it doesn’t. It promotes fragility, as young people wait helplessly to be acted upon.
The Times article profiles a New Jersey high school that has developed a dedicated program to meet the needs of anxious students. It relates an encounter between Paul Critelli, one of the program’s teachers, and a withdrawn, anxious student who claimed he had nothing to do.
Critelli looked at him incredulously. ‘Dude, you’re failing physics,’ Critelli said. ‘What do you mean you don’t have anything to do?’
‘There’s nothing I can do — I’m going to fail,’ the student mumbled.
Critelli’s student evidences an extreme external locus of control. He has collapsed utterly into victimhood, to the point that he is not able to imagine a way to advocate for himself or affect the outcome of his grade.
If anxiety is our chief malady, avoidance is its coddling nurse, always ready to assure us we need not risk confrontation with that which makes us uncomfortable. When we heed our fear, we stay safe, but we also stay out of life. Jung never forgot about the dangers of avoidance. Some 25 years after his period of school refusal, Jung wrote the following:
Life calls us forth to independence, and anyone who does not heed this call because of childhood laziness or timidity is threatened with neurosis. And once this has broken out, it becomes an increasingly valid reason for running away from life and remaining forever in the morally poisonous atmosphere of infancy.
I’ve seen the adults that teens who withdraw from life’s arena become. In my consulting room, they speak of lives unlived, and suffering unredeemed. It isn’t just that the world misses out on their talents and productive capacity. (Though that is no small loss – imagine if 12-year-old Carl hadn’t overheard his father’s conversation that day.) It’s that the story they came into the world to tell doesn’t get told.
Read the full article in Quillette.
The African Enlightenment
Dag Herbjørnsrud, Aeon, 13 December 2017
Ethiopia was no stranger to philosophy before Yacob. Around 1510, the Book of the Wise Philosophers was translated and adapted in Ethiopia by the Egyptian Abba Mikael. It is a collection of sayings from the early Greek Pre-Socratics, Plato, and Aristotle via the neo-Platonic dialogues, and is also influenced by Arabic philosophy and the Ethiopian discussions. In his Hatäta, Yacob criticises his contemporaries for not thinking independently, but rather accepting the claims of astrologers and soothsayers just because their predecessors did so. As a contrast, he recommends an enquiry based on scientific rationality and reason – as every human is born with intelligence and is of equal worth.
Far away, grappling with similar questions, was Yacob’s French contemporary Descartes (1596-1650). A major philosophical difference is that the Catholic Descartes explicitly denounced ‘infidels’ and atheists, whom he called ‘more arrogant than learned’ in his Meditations on First Philosophy (1641). This perspective is echoed in Locke’s A Letter Concerning Toleration (1689), which concludes that atheists ‘are not at all to be tolerated’. Descartes’s Meditations was dedicated to ‘the dean and doctors of the sacred Faculty of Theology in Paris’, and his premise was ‘to accept by means of faith the fact that the human soul does not perish with the body, and that God exists’.
In contrast, Yacob shows a much more agnostic, secular and enquiring method – which also reflects an openness towards atheistic thought. Chapter four of the Hatäta starts with a radical question: ‘Is everything that is written in the Holy Scriptures true?’ He goes on to point out that all the different religions claim theirs is the true faith:
Indeed each one says: ‘My faith is right, and those who believe in another faith believe in falsehood, and are the enemies of God.’ … As my own faith appears true to me, so does another one find his own faith true; but truth is one.
In this way, Yacob opens up an enlightened discourse on the subjectivity of religion, while still believing in some kind of universal Creator. His discussion of whether or not there is a God is more open-minded than Descartes’s, and possibly more accessible to modern-day readers, as when he incorporates existentialist perspectives:
Who is it that provided me with an ear to hear, who created me as a rational being and how have I come into this world? Where do I come from? Had I lived before the creator of the world, I would have known the beginning of my life and of the consciousness of myself. Who created me?
In chapter five, Yacob applies rational investigation to the different religious laws. He criticises Christianity, Islam, Judaism and Indian religions equally. For example, Yacob points out that the Creator in His wisdom has made blood flow monthly from the womb of women, in order for them to bear children. Thus, he concludes that the law of Moses, which states that menstruating women are impure, is against nature and the Creator, since it ‘impedes marriage and the entire life of a woman, and it spoils the law of mutual help, prevents the bringing up of children and destroys love’.
In this way, Yacob includes the perspectives of solidarity, women and affection in his philosophical argument. And he lived up to these ideals. After Yacob left the cave, he proposed to a poor maiden named Hirut, who served a rich family. Yacob argued with her master, who did not think a servant woman was equal to an educated man, but Yacob prevailed. When Hirut gladly accepted his proposal, Yacob pointed out that she should no longer be a servant, but rather his peer, because ‘husband and wife are equal in marriage’.
Read the full article in Aeon.
You’re decended from royalty and so is everyone else
Adam Rutherfoard, Nautilus, 4 January 2017
Joseph Chang is a statistician from Yale University and wished to analyze our ancestry not with genetics or family trees, but just with numbers. By asking how recently the people of Europe would have a common ancestor, he constructed a mathematical model that incorporated the number of ancestors an individual is presumed to have had (each with two parents), and given the current population size, the point at which all those possible lines of ascent up the family trees would cross. The answer was merely 600 years ago. Sometime at the end of the 13th century lived a man or woman from whom all Europeans could trace ancestry, if records permitted (which they don’t). If this sounds unlikely or weird, remember that this individual is one of thousands of lines of descent that you and everyone else has at this moment in time, and whoever this unknown individual was, they represent a tiny proportion of your total familial webbed pedigree. But if we could document the total family tree of everyone alive back through 600 years, among the impenetrable mess, everyone European alive would be able to select a line that would cross everyone else’s around the time of Richard II.
Chang’s calculations get even weirder if you go back a few more centuries. A thousand years in the past, the numbers say something very clear, and a bit disorienting. One-fifth of people alive a millennium ago in Europe are the ancestors of no one alive today. Their lines of descent petered out at some point, when they or one of their progeny did not leave any of their own. Conversely, the remaining 80 percent are the ancestor of everyone living today. All lines of ancestry coalesce on every individual in the 10th century…
Joseph Chang’s mathematical calculation didn’t account for something very obvious, which is that we don’t mate randomly. We typically marry within socioeconomic groups, within small geographical areas, within shared languages. But with Coop and Ralph’s genetic analysis, it didn’t seem to matter that much. Ancestry is such that genes can spread very quickly over generations. It might seem that a remote tribe would have been isolated from others for centuries in, for example, the Amazon. But no one is isolated indefinitely, and it only takes a very small number of people to breed out with people from beyond their direct gene pool for that DNA to rapidly descend through the generations.
Chang factored that into a further study of common ancestry beyond Europe, and concluded in 2003 that the most recent common ancestor of everyone alive today on Earth lived only around 3,400 years ago.
He used two calculations, one that simply crunched the math of ancestry, and another that incorporated a simplified model of towns and migration and ports and people. In the computer model, a port has a higher rate of immigration, and growth rates are higher. With all these and other factors input, the computer calculates when lines of ancestry cross, and the number comes out at around 1400 B.C. It places that person somewhere in Asia, too, but that is more likely to do with the geographical center point from which the migrations are calculated. If this sounds too recent, or baffling because of remote populations in South America or the islands of the South Pacific, remember that no population is known to have remained isolated over a sustained period of time, even in those remote locations. The influx of the Spanish into South America meant their genes spread rapidly into decimated indigenous tribes, and eventually to the most remote peoples. The inhabitants of the minuscule Pingelap and Mokil atolls in the mid-Pacific have incorporated Europeans into their gene pools after they were discovered in the years of the 19th century. Even religiously isolated groups such as the Samaritans, who number fewer than 800 and are sequestered within Israel, have elected to outbreed in order to expand their limited gene pool.
When Chang factored in new, highly conservative variables, such as reducing the number of migrants across the Bering Straits to one person every 10 generations, the age of the most recent common ancestor of everyone alive went up to 3,600 years ago.
Read the full article in Nautilus.
Selective exposure to misinformation
Andrew Guess, Brendan Nyhan & Jason Reifler,
Darmouth College, 20 December 2017
We estimate that 27.4% of Americans age 18 or older visited an article on a pro-Trump or pro- Clinton fake news website during our study period, which covered the final weeks of the 2016 election campaign (95% CI: 24.4%–30.3%). While this proportion may appear small, 27% of the voting age population in the United States is more than 65 million people. In total, articles on pro- Trump or pro-Clinton fake news websites represented an average of approximately 2.6% of all the articles Americans read on sites focusing on hard news topics during this period. The pro-Trump or pro-Clinton fake news that people read was heavily skewed toward Donald Trump — people saw an average (mean) of 5.45 articles from fake news websites during the study period of October 7–November 14, 2007. Nearly all of these were pro-Trump (average of 5.00 pro-Trump articles).
There are stark differences by candidate support in the frequency and slant of fake news website visits.3 We focus specifically in this study on respondents who reported supporting Hillary Clinton or Donald Trump in our survey (76% of our sample) because of our focus on selective exposure by candidate preference. People who supported Trump were far more likely to visit fake news websites — especially those that are pro-Trump — than Clinton supporters. Among Trump supporters, 40% read at least one article from a pro-Trump fake news website (mean = 13.1, 95% CI: 7.8, 18.3) compared with only 15% of Clinton supporters (mean = 0.51, 95% CI: 0.39, 0.64). Consumption of articles from pro-Clinton fake news websites was much lower, though also somewhat divided by candidate support. Clinton supporters were modestly more likely to have visited pro-Clinton fake news websites (11.3%, mean articles: 0.85) versus Trump supporters (2.8%, mean articles: 0.05). The di↵erences by candidate preference that we observe in fake news website visits are even more pronounced when expressed in terms of the composition of the overall news diets of each group. Articles on fake news websites represented an average of 6.2% of the pages visited on sites that focused on news topics among Trump supporters versus 0.8% among Clinton supporters.
The differences we observe in visits to pro-Trump and pro-Clinton fake news websites by candi- date support are statistically significant in OLS models even after we include standard demographic and political covariates, including a standard scale measuring general political knowledge (Table 1).4 Trump supporters were disproportionately more likely to consume pro-Trump fake news and less likely to consume pro-Clinton fake news relative to Clinton supporters, supporting a selective exposure account. Older Americans (age 60 and older) were also much more likely to visit fake news conditional on these covariates, including pro-Trump fake news.
We also find evidence of selective exposure within fake news; pro-Trump voters di↵erentially visited pro-Trump fake news websites compared with pro-Clinton websites. To help demonstrate this, we employ a randomization inference-style approach in which we randomly permute the coding (pro-Trump or pro-Clinton) of visits to fake news websites by Trump supporters in our panel. Total consumption of articles from pro-Trump fake news websites is as frequent as we observe or greater in 4 of 1,000 simulations (p = 0.004 one-sided; see the Supplementary Materials). We thus reject the null hypothesis that Trump supporters are no more likely to visit pro-Trump fake news content than pro-Clinton fake news content.
Finally, we show that individuals who engage in high levels of selective exposure to online news in general are also di↵erentially likely to visit fake news websites favoring their preferred candidate. In general, fake news consumption seems to be a complement to, rather than a substitute for, hard news — visits to fake news websites are highest among people who consume the most hard news and do not measurably decrease among the most politically knowledgeable individuals.
Read the full article as a Dartmouth College paper.
The anthropocentric idealism of Judith Butler
Justin EH Smith, 6 January 2018
Those who, with Judith Butler, deny a distinction between sex and gender, however they may think of themselves, are either classical philosophical idealists, or they are anthropocentrist human-exceptionalists, and thus heirs to the legacy of the Christian theological model of the human being.
Consider this from a recent online ‘syllabus’: ‘Butler proves that the distinction between sex and gender does not hold. A sexed body cannot signal itself as different sexually without cultural gender categories, and the idea that sex comes before cultural factors (which are believed to be only overlaid on top of sex), is disproven in this book. Gender is performance, there’s no solid universal gender basis beneath these always creative performances. There is no concrete sexed body without constructed human categories to interpret it.’
At least since Fichte dispensed with the Kantian thing-in-itself, we have been aware of the possibility that there is no concrete external world without human categories to interpret it. That is, if we acknowledge that the world beyond our experience is entirely inaccessible to us by definition, then there are good arguments to the effect that we should not believe it exists at all.
But the philosophical possibility of absolute idealism in no way prevents us from continuing on with our research programmes in, say, fluid dynamics or vulcanology. What makes the human body so different?
The concrete sexed human body is, alongside volcanoes, worms, etc., a thing of nature– unless, that is, you are an idealist and you think there is no such thing as nature at all. But in any case, the sexed human body, the volcano, and the worms, whether ‘constructs’ or natural objects, can only have the same ontological status– unless, that is, you are a human exceptionalist.
Read the full article on Justin Smith’s blog.
Digging into the myth of Timbuktu
Peter Coutros, Sapiens, 31 October 2017
Although the oral history of the region suggested that Timbuktu was first settled by Tuareg nomads in the 12th century, preliminary archaeological research during the 1980s suggested a much earlier occupation. In 2008, a team of archaeologists from Yale University and the Malian Ministry of Culture’s Direction Nationale du Patrimoine Culturel set out to investigate the murky origins of the fabled city.
Led by archaeologist Douglas Post Park, a descendent of the famed Scottish explorer Mungo Park, the research team eventually located hundreds of settlements dating at least as far back as 500 B.C. As co-director of the project, I worked on Park’s team for three seasons of fieldwork, surveying, excavating, and analyzing thousands of ceramic sherds. Our findings pushed back the age of Timbuktu by more than 1,500 years and revealed a remarkable and unique type of urban landscape in which people structured their lives around the pulses of the Niger River’s flood seasons. These pulses were connected to many aspects of daily life, such as rice and millet farming; sheep, goat, and cattle herding; and of course, hunting and fishing. When the floodwaters inundated the floodplain, people moved to higher ground, likely clustering together in extended family units. As the waters receded, people again spread out across the landscape, planting their crops in the rejuvenated soils. The communities living in the region likely developed this routine somewhere around 200 B.C. and continued to thrive within this social structure until around A.D. 900.
The research by Park’s team is just one of a growing number of examples that suggest the typical model for ancient cities might not be as typical as once believed. The monumental architecture, concentrated wealth, and dynastic kings central to ancient cities like Uruk (in present-day Iraq) and Memphis, Egypt, have defined the archetypal early city. But at Timbuktu, the urban area developed into something unique. From the 1,100-year span during which this society thrived, there is so far no evidence of overt hierarchical structures. Power was conveyed in ways that were very different from those of despotic rulers in Mesopotamia, Egypt, or other urban centers in the ancient world.
Read the full article in Sapiens.
The forgotten origins of politics in sport
Kenneth Cohen, Slate, 2 January 2018
The Civil War is often cited as the moment when national unity became the political goal of sports. ‘The Star-Spangled Banner’ was famously played before a ballgame for the first time in 1862, and historians often note the instances when Northern and Southern soldiers played baseball together. But much more common were the boxing matches that reflected the era’s political and ethnic factions. Irish-born champion John Morrissey rose up through the Democratic ranks by representing the party in fights and eventually won a seat in Congress in 1867, the first sports star to achieve national office.
It was only after the war that the relationship between sports and politics began to change. Black Americans had made up a large percentage of the early republic’s jockeys, boxers, and sports fans, although they had never been permitted to make explicitly political statements at sporting events. But after the Civil War, black stars and fans had (at least nominally) many of the same rights as white men, including the right to vote. In the decades following the war, ballplayers like Octavius Catto, jockeys like Jimmy Winkfield and Jimmy Lee, and boxer Jack Johnson made white men worry about how black athletes might channel their popularity into politics. If everyone wanted to ‘pat the back of this ignorant colored lad’—as one writer described a mob celebrating Jimmy Lee in 1907—then the old politicization of sports suddenly presented a new threat to the racial order. As another correspondent put it, sporting environments needed to dissuade black men from politicizing sports so that they ‘did not strut about the lawns to pre-empt the good seats in the grand stand or go about the resorts of white men flaunting loud stripes and checks and the fifteenth amendment’ that granted them voting rights. For many white Americans, sport was no longer an amusing way to drum up votes once it promised to mobilize those who did not share their Northern European heritage.
A fear of black political power gave new urgency to a reform movement that had been trying for generations to separate politics from sports. As far back as the colonial period, critics had warned voters of being ‘bribed, or drammed, or frolicked, or bought, or coaxed, or threatened out of your Birthright [to vote].’ Those warnings suddenly gained traction at the turn of the 20th century, as a string of corruption cases and a growing pool of black and immigrant voters scared many middle-class Americans. The result was a wave of legislation that outlawed gambling on elections, banned alcohol sales near polling places, instituted the secret ballot, and enacted more stringent voter registration. For better and worse, these changes sobered the experience of American politics and greatly reduced voter turnout.
As political events became less sporting, sporting events became less politically divisive. Without politicians and parties driving the sporting experience, entrepreneurs embraced celebratory nationalism. By the 1910s, partisan brouhahas had evaporated almost entirely from sporting events. Elected officials began to throw out first pitches instead of campaigning on baseball cards. Games became a place for honoring office holders rather than arguing about who should be in office. This was the era when ‘The Star-Spangled Banner’ began to be played regularly before sporting events, beginning with a 1918 World Series game in Chicago—one day after a federal building there had been bombed, allegedly by immigrant anarchists and labor activists. National pride and order replaced disorderly political division only once blacks and immigrants became political forces to be reckoned with.
Read the full article in Slate.
Thoughts made visible
Noga Arikha, Lapham’s Quarterly, Winter 2017
Yet it was always the case for philosophers that thinking about the nature of thought was dizzying, and that the mind could not, indeed should not, see itself. As Saint Augustine wrote of memory in his Confessions, ‘Who can plumb its depths? And yet it is a faculty of my soul. Although it is part of my nature, I cannot understand all that I am.’ That question is still relevant. I cannot understand all that I am resonates within our modernity. Philosophical questions regarding how far one can look into the mind remain wide open. They also inform what it means to look into the brain.
The brain started coming into focus about five hundred years ago. It was an anatomical revolution that began in Italy and the Lowlands, with the likes of the anatomist Vesalius, who was able to revise central dicta of Galen thanks to the practice of human dissection, which had been forbidden in Galen’s day. The inauguration of neurological research was one aspect of the scientific revolution that, beginning in the seventeenth century, established the bases of modern science in Europe. The drawings of the brain by architect Christopher Wren in Cerebri anatomeby Oxford physician Thomas Willis, who coined the term neurologie, were an extraordinary accomplishment resulting from a technological innovation: the preservation of brains in alcohol. Until then a dead brain was a disintegrating gelatinous mass that yielded few secrets about its physiology.
Willis denied he was investigating the rational soul, which for him remained the province of religion. Until the eighteenth century, what we call science was termed natural philosophy, and its practitioners were preoccupied with issues of metaphysics and ethics—the soul, free will, determinism, and the nature of knowledge itself. Across the Channel, French philosopher René Descartes argued that mechanical operations alone could account for the material body’s perception, sensation, and movement, without the assistance of an immaterial soul. But he also turned the admittedly baffling question of how the material and immaterial can interact into the fulcrum of his provocative mind-body dualism. His claim that soul and body were entirely separate sparked considered response—and awaited resolution.
We have, on the whole, left that dualism behind. The process of resolving Descartes’ question began once a materialist understanding of the mind became politically and ideologically acceptable. The eighteenth century saw increasingly detailed discoveries in the anatomy and physiology of humans and animals. Investigations in neuroanatomy, neurophysiology, and neuropsychology proliferated in the nineteenth century, benefiting from discoveries in chemistry. Clinical observations of patients in whom a vascular accident, say, had led to a visible deficiency started filling in the picture of how the brain worked. Paul Broca individuated in 1861 an area in the frontal part of the brain’s left hemisphere that was crucial for speech by performing autopsies on patients who had lost their capacity to speak. Here was a potent instance of how higher mental abilities were the outcome of brain activity. The region is now called Broca’s area. Many such areas were identified, mapped, and named, like so many regions on the dark side of the moon. Debates raged between those like Broca, who believed the functions were localized, and their opponents, who saw the brain as a systemic whole, reacting in part to the charlatanism of phrenology, which reached its peak in popularity during the 1820s and 1830s. But localization emerged triumphant.
Science had branched off from philosophical speculation about the mind and knowledge, which itself had turned away from natural philosophy. But once psychology became scientific, notably in 1890 with William James’ Principles of Psychology, the study of the mind started to rejoin the clinical study of the brain. The plan was now to study by scientific means perception, sensation, memory, rational thought, attention, volition, emotions, anxiety, aggression, depression, the sense of self, even consciousness. And the disciplines that constitute the neurosciences, from the molecular to the psychological level, are now engaged in this exploration—in some ways, a continuation of philosophy by empirical means.
Read the full article in the Lapham’s Quarterly.
The first pass
Conor Pope, Medium, 28 December 2017
Blackburn’s football history is a little less celebrated than Barcelona’s, but the town has had an important role in both the development and the export of the game, and many clues of that remain. Dynamo Kiev, the most successful club in Ukraine and the former Soviet Union, still play in blue and white because it was founded by a Blackburn Rovers fan. Grasshopper Zürich, the most successful club in Switzerland, still plays in what is essentially a Rovers kit for the same reason.
In Fabregas’ native Spain, Athletic Bilbao and Atletico Madrid both only lost their Rovers-inspired blue and white halves when the person they sent to England to pick up more Blackburn shirts for each club returned with Southampton kits instead. Atletico retain the blue shorts from the original kit to this day.
But global Blackburn couture isn’t the only thing the town gave to the beautiful game. More important is the FA Cup final of 1883, where the now-defunct team Blackburn Olympic beat the Old Etonians to become the first northern football club to win the competition, the first working class club to win the competition and — most importantly — the first team to win the cup by employing a revolutionary new tactic. It was called ‘passing’. You may have heard of it…
Olympic were not the first to use passing — as well as fellow Lancastrian teams, Queen’s Park in Scotland were a notable passing side — but were the first to see real success by employing it as a tactic. Taking on a generally fitter, upper class side who employed a basic ‘kick-and-rush’ approach, Olympic players would knock the ball past their opponents to teammates who had more space, tiring the other team out: Wilson says that the Old Etonians were ‘unfamiliar’ and unable to cope with Olympic’s approach of ‘hitting long, sweeping passes from wing to wing’. They won with a goal deep in extra time, after a ball from the right flank found Jimmy Costley in space on the left. It was not passing for beauty’s sake, but for winning’s. The Old Etonians were not outfought by their class inferiors, but outthought.
This, too, is political. To dare not to play the proper Etonian way was essentially a class rebellion. The Old Etonians looked down on passing as ungentlemanly and against the spirit of the game. The way they played was likely not that different from the how the uncultured Eton wall game looks: a sport in which decades literally go by without a point being scored.
Olympic, an XI of cotton weavers and plumbers, played as a team. Nothing could be achieved without the solidarity of the block. It rejected moments of individual brilliance in favour of collective action. Upon returning to Blackburn, the side was greeted with street celebrations and brass bands.
Read the full article on Medium.
The best opera recording ever is Maria Callas
singing ‘Tosca.’ Hear why.
Anthony Tommasini, New York Times, 29 December 2017
Every soprano who sings Tosca tries to make her opening words — frantic calls of her lover’s name, before she’s even onstage — sound suspicious. The title character of Puccini’s great opera is an acclaimed prima donna in Rome in 1800. Tosca is passionate and jealous. So she must be wondering why the door to the church where her beloved Mario is painting a mural is locked. And who did she just hear him whispering with?
But on Maria Callas’s classic 1953 recording of the opera, she gives us more than jealousy. Her opening cries of ‘Mario’ are also panicked, almost desperate. A touch of fragile neediness comes through as this rattled Tosca calls out Mario’s name three more times.
This fleeting episode, only a few seconds long, is one of countless indelible moments in an account of ‘Tosca’ that has often been called the greatest opera recording ever made. Even though it was done under studio conditions, Callas, Giuseppe di Stefano (as the idealistic Mario) and Tito Gobbi (as the villainous police chief Scarpia) are thrillingly alive and subtle for the towering maestro Victor de Sabata and the forces of the Teatro alla Scala in Milan. It’s hard to think of a recording of any opera that nails a work so stunningly, that seems so definitive.
Read the full article in the New York Times.
The images are, from top down: Illustration of quantum computing from PBS; ‘St Francis Driving Out the Demons of Arezzo’ by Giotto; ‘Charlemagne’ by Caspar Johann Nepomuk Scheuren; Jack Johnson fighting James Jeffires, 1910 (via Fight City).