Longtermism

The Unignorable Chinese Market: Unlocking Weixin's Potential for Business Growth

Retrieved on: 
Sunday, June 25, 2023

Tencent Marketing Solution also utilized the occasion to enhance its Joint Business Partnership (JBP), reinforcing its collaborative efforts with partners as part of its ongoing commitment.

Key Points: 
  • The thriving digital ecosystem is one of the key drivers behind the strong performance of the Chinese market.
  • Brands can leverage Weixin Mini-Program, paid traffic, Weixin Search, and travel retail to drive business growth.
  • Ethen Zhang, Deputy General Manager of Channel Sales Department, Tencent Marketing Solution, emphasized the collaborative potential.

'Effective altruism' has caught on with billionaire donors – but is the world's most headline-making one on board?

Retrieved on: 
Saturday, April 15, 2023

Part of such notions’ appeal may be the argument that they’re not just exciting, or profitable, but would benefit humanity as a whole.

Key Points: 
  • But what do these phrases really mean – and how does Musk’s record stack up?

The greatest good

    • In simple terms, utilitarianism holds that the right action is whichever maximizes net happiness.
    • Like any moral philosophy, there is a dizzying array of varieties, but utilitarians generally share a couple of important principles.
    • One is impartiality: everyone's well-being counts equally, a principle often summed up by the expression “each to count for one, and none for more than one.” Another is consequentialism: utilitarianism ranks potential choices based on their outcomes, usually prioritizing whichever choice would lead to the greatest value – in other words, the greatest pleasure, the least amount of pain or the most preferences fulfilled.
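
This ranking rule can be sketched as a toy decision procedure (not from the article; the actions, people, and utility numbers below are invented purely for illustration): sum well-being across everyone affected by each option, then pick whichever option has the largest total.

```python
# A minimal sketch of utilitarian choice-ranking, under the assumption
# that well-being can be scored numerically. All names and numbers here
# are hypothetical.

def total_utility(outcomes):
    """Sum well-being across everyone affected -- 'each to count for one'."""
    return sum(outcomes.values())

def best_action(actions):
    """Rank candidate actions by aggregate outcome and return the top one."""
    return max(actions, key=lambda a: total_utility(actions[a]))

# Hypothetical choice: fund malaria bed nets vs. a local mural.
actions = {
    "bed_nets": {"alice": 2, "bob": 9, "carol": 9},  # large health gains
    "mural":    {"alice": 5, "bob": 1, "carol": 1},  # mild enjoyment
}
print(best_action(actions))  # -> bed_nets (total 20 vs. 7)
```

Note that the procedure is agnostic about *how* value is produced, which is the feature the next section says utilitarianism shares with effective altruism: only the aggregate outcome matters.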

Utilitarianism 2.0?

    • Utilitarianism shares a number of features with effective altruism.
    • In addition, both utilitarianism and effective altruism are agnostic about how to achieve their goals: what matters is achieving the greatest value, not necessarily how we get there.

Long-term view

    • Longtermists, including many people involved in effective altruism, believe that our obligations to future people matter just as much as our obligations to people living today.
    • In this view, issues that pose an existential risk to humanity, such as a giant asteroid striking Earth, are particularly important to solve, because they threaten everyone who could ever live.
    • Longtermists aim to guide humanity past these threats to ensure that future people can exist and live good lives, even in a billion years’ time.

Measuring Musk

    • Musk has claimed that MacAskill’s effective altruism “is a close match for my philosophy.” But how close is it really?
    • It’s hard to grade someone on their particular moral commitments, but the record seems choppy.
    • To start, the original motivation for the effective altruism movement was to help the global poor as much as possible.
    • Musk did not, the public record suggests, donate to the World Food Program, but he did soon give a similar amount to his own foundation – a move some critics dismissed as a tax dodge. It also sits uneasily with a core principle of effective altruism: giving only to organizations whose cost-effectiveness has been rigorously studied.

Futuristic solutions

    • Musk has suggested that negative media coverage of autonomous driving is tantamount to killing people, by dissuading them from using self-driving cars.
    • In this view, Tesla seems to be an innovative means to a utilitarian end.
    • His Boring Company’s attempts to build tunnels under Los Angeles, meanwhile, have been criticized as expensive and inefficient.
    • Answering this question requires thinking about three core questions, starting with: are his initiatives trying to do the most good for everyone?

Let's base AI debates on reality, not extreme fears about the future

Retrieved on: 
Monday, April 3, 2023

The letter, published by the non-profit Future of Life Institute, calls on all AI labs to stop training AI systems more powerful than GPT-4, the model behind ChatGPT.

Key Points: 
  • In AI debates, longtermism manifests as the belief that artificial intelligence poses long-term or existential risks to humanity’s future by becoming an out-of-control superintelligence.
  • Such AI fantasies are among the many fears in Silicon Valley that can harden into dark prophecies.
  • The open letter sees AI language technology like ChatGPT as a cognitive breakthrough — something that allows an AI to compete with humans at general tasks.
  • And when it comes to privacy matters, ChatGPT’s approach is hard to distinguish from another AI application, Clearview AI.
  • The letter follows an old dynamic that my co-author and I identify in a forthcoming peer-reviewed chapter about AI governance.
  • There is a tendency to view AI as either an existential risk or something mundane and technical.
  • These letters call for reforms and a more robust approach to AI governance to protect those being affected by it.