Effective altruism: how to speculate with morality

The crypto trader Sam Bankman-Fried, who was arrested in December on suspicion of fraud and money laundering after the bankruptcy of his company FTX and is currently on trial, attracted attention not only with his demonstrative refusal to appear well-groomed. His uncombed hair, baggy T-shirts and shorts conveyed the informality of a genius who doesn't need to heed etiquette.
But they were also the working clothes of the unbridled benefactor Bankman-Fried posed as: he was the hero of the effective altruism movement, a sort of modern-day Robin Hood.
At the beginning of last year he announced that he wanted to donate 99 percent of his income; at the peak of his career he is said to have owned assets of $26.5 billion.
After all, he did put $160 million into over 100 aid projects, from fighting the pandemic to biotech start-ups to scholarships for schoolchildren. Bankman-Fried is probably the most prominent EA, as the supporters of effective altruism call themselves.
But even independently of his meteoric career, the idea has developed remarkably in recent years. Its founder is the Scottish moral philosopher William MacAskill (32 years old today), who teaches at Oxford and coined the term in 2011 together with his fellow student Toby Ord and a few like-minded people.
He could also have called it "Rational Compassion", "Optimal Philanthropy" or "Evidence-Based Charity", but the phrase "effective altruism" won a poll in 2011 and has since proven to be a surprisingly catchy formula.
At its core, the utilitarian principle states that the inhabitants of the Western world, where even people below the poverty line are richer than 85 percent of the people on earth, are morally obliged not only to do good, but to do so with the maximum possible effectiveness.

From a moral-philosophical concept to a lifestyle brand

The pioneer of the idea is the Australian utilitarian Peter Singer; MacAskill and his followers have consistently advanced, networked and marketed his approach.
They hit a nerve, especially in the Californian tech scene, that is, in the milieu whose members are characterized by above-average incomes, a bad conscience and a weakness for pragmatic solutions. Effective altruism fits perfectly with the idealism of Silicon Valley, which wants to remedy the symptoms of social problems with great zeal and vanity, without ever shaking up the structural causes on which its own prosperity ultimately rests.
Today, effective altruism has evolved from a moral-philosophical concept into a successful lifestyle brand. More than 7,000 members of the movement have now pledged to donate at least ten percent of their income, and MacAskill himself gives away half of his earnings.
The journalist Ezra Klein is a follower, as is the philosopher and podcaster Sam Harris. There are more than 200 EA groups worldwide, and every week there are "retreats" and "unconferences" somewhere in the world, like this weekend in Berlin-Wannsee, where over vegan food, between parkour workshops and forest walks, participants learn not only how to do good without wasting money on overpriced aid measures, but also how to use their own talents profitably.

Career advice for young people who want to improve the world

MacAskill also founded an organization for the "ethical life optimization" of his followers, named after the length of an average working life: "80,000 Hours" offers career advice to young people who want to improve the world. Above all, graduates of renowned universities are advised to take a well-paid job on Wall Street and donate the money they earn, rather than getting personally involved in charities.
"Earning to give" is the motto with which MacAskill initially even justified a career in the oil industry when in doubt, after all, someone else would do the job otherwise Bankman-Fried also personally suggested a career in finance for MacAskill when he was a freshman at MIT looking for a job
In some cases, the cool calculation with which effective altruism contrasts well-intentioned aid projects with actually effective ones is quite revealing. Instead of donating to guide dogs, which cost $40,000 each to train, MacAskill calculates, 40 people at risk of going blind could be operated on for the same money.
Instead of encouraging children in poor countries to attend school by giving them uniforms, attendance could be increased twenty-fold with deworming treatments. Among the most popular positive examples are mosquito nets to prevent malaria, vitamin A tablets and vaccinations for children.
Save the Shrimp!

The grotesque excesses to which such unbridled utilitarianism can lead are shown by the projects that in recent years have increasingly been crowding out direct help for those in need. The delusion of maximizing supposedly quantifiable happiness in life almost inevitably leads to a special soft spot for animal welfare, especially for livestock.
After all, the hundreds of billions of animals living on factory farms far outnumber the human population, so saving them promises fantastic value for money. The smaller the animals, the more effective: protecting shrimp, for example, of which 400 billion have to live and die under painful conditions every year, is particularly good for the moral balance sheet.
The calculus is similarly questionable when it comes to funds for the movement's many organizations. When asked for particularly effective investment tips, MacAskill unabashedly recommends donations to his own foundations, citing their supposedly considerable leverage.
For example, anyone who invests in the work of his foundation "Giving What We Can", he claims, enables experienced fundraisers to mobilize many times that amount in donations. According to "80,000 Hours", "building effective altruism" even ranks third on the list of the most urgent world problems, behind the risks of artificial intelligence and those of catastrophic pandemics. The bloated administrative and organizational apparatus, criticized elsewhere as inefficient bureaucracy, is glorified within EA as a miracle cure for increasing donations, as "meta charity" with a verifiable multiplier effect.
Like a modern sale of indulgences, the movement enthusiastically pays into the accounts of its own organizations.

Our Disenfranchised Descendants

The fact that the supposed effectiveness of this altruism has meanwhile become a mere assertion is mainly due to a shift in priorities in MacAskill's thinking, which became apparent at the latest in his book "What We Owe the Future", published in September.
In it he propagates a complete turn to the theory (and practice) of "longtermism", an idea that has long been brewing in the orbit of Californian philanthropists as well as in the circles of Oxford philosophers around Toby Ord, Nick Bostrom and the Future of Humanity Institute. Longtermism rests on the thesis that the people of the future are morally no less important than the current generation; and because their number is theoretically infinitely larger, their rescue takes priority.
MacAskill calls our descendants "the silent billions" and compares them to the disenfranchised groups that long had to fight for their interests in the past. Of course, broadening one's perspective toward the future is entirely in keeping with the times.
In a way, longtermism has become common sense; after all, even the Federal Constitutional Court recently declared a "fundamental right to a decent future". But MacAskill, his wealthy friends, and more and more of the foundations and lobbyists they fund are stretching their horizons so far into the future that their utilitarianism finally becomes speculative, which a stock-market-conditioned clientele may find quite appealing.
Donations against the robot uprising

The billions of EA donors now hardly go toward the global fight against poverty and disease, but toward the fight against new pandemics, toward initiatives to develop human-friendly AI (which, as the example of the OpenAI foundation shows, are the ones most determined to advance the very future they warn about), and toward countless institutes that are well paid to ponder what dangers, still unknown today, might lurk in the future beyond the robot uprising. Morally incomparably more fatal, and this is one of the bizarre punchlines of the ethics futurist Bostrom, is for example the dawdling humanity displays in the colonization of space: while we watch Netflix or fight cancer, billions of suns in the universe are heating up empty spaces in which sentient beings could be living lives worth living.
In our supercluster of galaxies alone, as Bostrom wrote in his 2003 essay "Astronomical Waste", 10^38 lives are lost with every century of missed space colonization, which still comes to hundreds of quadrillions per second. With such references to potential happiness returns in an indefinite future, moral effectiveness can be asserted for any long-term project, no matter how outlandish.
If necessary, some institute is subsidized to examine and quantify the danger of a hypothetical apocalypse. Anyone still concerned with the mundane problems of the present, on the other hand, is merely demonstrating their limited horizons.
No wonder that the famous escapist Elon Musk praised MacAskill's book as "close to his philosophy". In the end, effective altruism today is nothing more than a morally charged investment fund, a pyramid scheme with which one can turn away from the current problems of humanity with a clear conscience.
In principle, saving the future always pays off – most of all for managers in the present.