Ethicist

In a future with more ‘mind reading,’ thanks to neurotech, we may need to rethink freedom of thought

Retrieved on: 
Tuesday, April 9, 2024

He warned that writing undermines memory – that it is nothing but a reminder of some previous thought.

Key Points: 
  • Today, the U.S. is in the middle of a similar panic over TikTok, with critics worried about its impact on viewers’ freedom of thought.
  • Brain-computer interfaces, called BCIs, have rightfully prompted debate about the appropriate limits of technologies that interact with the nervous system.
  • But as my research on neurorights argues, protecting the mind isn’t nearly as easy as protecting bodies and property.

Thoughts vs. things

  • The body has clear boundaries, and crossing them without permission is clearly prohibited.
  • It is normally obvious when a person violates laws prohibiting assault or battery, for example.
  • The same is true about regulations that protect a person’s property.
  • A person’s thoughts, by contrast, are largely the product of other people’s thoughts and actions.
  • Everything from how a person perceives colors and shapes to their most basic beliefs is influenced by what others say and do.
  • If I’m not allowed to influence others’ thoughts, then I can never leave my house, because simply by doing so I cause people to think and act in certain ways.

Neurotech and control

  • People may not be able to completely control what gets into their heads, but they should have significant control over what goes out – and some people believe societies need “neurorights” regulations to ensure that.
  • Neurotech represents a new threat to our ability to control which of our thoughts we reveal to others.
  • There are ongoing efforts, for example, to develop wearable neurotech that would read and adjust the customer’s brainwaves to help them improve their mood or get better sleep.
  • For example, nations could prohibit companies that make commercial neurotech devices, like those meant to improve the wearer’s sleep, from storing the brainwave data those devices collect.
  • Yet I would argue that it may not be necessary, or even feasible, to protect against neurotech putting information into our brains – though it is hard to predict what capabilities neurotech will have even a few years from now.
  • But one thing is certain: With or without neurotech, our control over our own minds is already less absolute than many of us like to think.


Parker Crutchfield does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Children are expensive – not just for parents, but for the environment – so how many is too many?

Retrieved on: 
Thursday, February 15, 2024

Natural habitats are being decimated, the world is growing hotter, and scientists fear we are experiencing the sixth mass extinction event in Earth’s history.

Key Points: 
  • Under such circumstances, is it reasonable to bring a child into the world?
  • Recently, my work has explored questions where these two fields intersect, such as how climate change should affect decision-making about having a family.

A lifelong footprint

  • Many people who care about the environment believe they are obligated to try to reduce their impact: driving fuel-efficient vehicles, recycling and purchasing food locally, for example.
  • So if you think you are obligated to take other steps to reduce your environmental impact, you should limit your family size, too.
  • In response, however, some people may argue that adding a single person to a planet of 8 billion cannot make a meaningful difference.

Crunching the numbers

  • For example, statistician Paul Murtaugh and scientist Michael Schlax attempted to estimate the “carbon legacy” tied to a couple’s choice to procreate.
  • They estimated the total lifetime emissions of individuals living in the world’s 11 most populous countries.
  • Driving a more fuel-efficient car, on the other hand – getting 10 more miles to the gallon – would save only 148 metric tons of CO2-equivalent.
  • He found that the average American contributes roughly one two-billionth of the total greenhouse gases that cause climate change.
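The generational weighting behind such “carbon legacy” estimates can be sketched in a few lines. This is an illustrative sketch only: the emissions figure, fertility rate, and function below are assumptions for demonstration, not Murtaugh and Schlax’s published numbers. The approach commonly attributed to their study assigns each parent half of a child’s lifetime emissions, a quarter of each grandchild’s, and so on down the generations.

```python
# Illustrative sketch of the "carbon legacy" weighting: each ancestor is
# attributed half of a child's lifetime emissions, a quarter of each
# grandchild's, and so on. All input numbers here are hypothetical.

def carbon_legacy(lifetime_emissions_t, children_per_person, generations):
    """Approximate one person's carbon legacy in metric tons of CO2.

    lifetime_emissions_t: assumed lifetime emissions per descendant (t CO2)
    children_per_person: assumed constant fertility rate
    generations: how many generations of descendants to count
    """
    legacy = 0.0
    descendants = 1.0   # expected descendants in the current generation
    weight = 1.0        # genetic share attributed to the ancestor
    for _ in range(generations):
        descendants *= children_per_person
        weight *= 0.5
        legacy += descendants * weight * lifetime_emissions_t
    return legacy

# With two children per person, each generation contributes the full
# lifetime_emissions_t (2 * 0.5 = 1), so the legacy grows linearly.
print(carbon_legacy(500, 2, 1))   # → 500.0
print(carbon_legacy(500, 2, 10))  # → 5000.0
```

The sketch makes the article’s comparison concrete: under these assumed numbers, even a few generations of descendants dwarf the 148 metric tons saved by a more fuel-efficient car.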

Collective toll

  • One common thought in ethics is that people should avoid participating in enterprises that involve collective wrongdoing.
  • Suppose someone considers making a small donation to an organization that they learn is engaged in immoral activities, such as polluting a local river.
  • We could reason the same way about procreation: Overpopulation is a collective problem that is degrading the environment and causing harm, so individuals should reduce their contribution to it when they can.

Moral gray zone


But perhaps having children warrants an exception. Parenthood is often a crucial part of people’s life plans and makes their lives far more meaningful, even if it does come at a cost to the planet. Some people believe reproductive freedom is so important that no one should feel moral pressure to restrict the size of their family.

  • Is there a way to balance the varied and competing moral considerations in play here?
  • I believe this allows a couple an appropriate amount of reproductive freedom while also recognizing the moral significance of the environmental problems linked to population growth.
  • It is also possible, as ethicist Kalle Grill has argued, that none of these positions gets the moral calculus exactly right.
  • Regardless, it is clear that prospective parents should reflect on the moral dimensions of procreation and its importance to their life plans.


Trevor Hedberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Several companies are testing brain implants – why is there so much attention swirling around Neuralink? Two professors unpack the ethical issues

Retrieved on: 
Wednesday, February 14, 2024

Academic and commercial groups are testing “brain-computer interface” devices to enable people with disabilities to function more independently.

Key Points: 
  • In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain.

How does a brain chip work?

  • Subjects in the company’s PRIME study – short for Precise Robotically Implanted Brain-Computer Interface – undergo surgery to place the device in a part of the brain that controls movement.
  • The chip records and processes the brain’s electrical activity, then transmits this data to an external device, such as a phone or computer.
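The record, process, and transmit loop described above can be illustrated with a toy sketch. Nothing here reflects Neuralink’s actual firmware or signal processing; the threshold-based “spike detection,” the sample values, and the packet format are all invented for illustration.

```python
# Toy sketch (not Neuralink's actual pipeline) of the three stages the
# article describes: record electrical activity, process it on the chip,
# transmit the result to an external device such as a phone or computer.

from statistics import mean

def record(raw_samples):
    # Stand-in for sampling electrode voltages (microvolts).
    return list(raw_samples)

def process(samples, threshold_uv=50.0):
    # Crude on-chip step: flag samples whose deviation from the mean
    # exceeds a threshold, loosely mimicking spike detection.
    baseline = mean(samples)
    return [abs(s - baseline) > threshold_uv for s in samples]

def transmit(events):
    # Stand-in for the wireless link: package the detected event count.
    return {"spikes_detected": sum(events)}

samples = record([0, 3, 120, -2, 95, 1])
packet = transmit(process(samples))
print(packet)  # → {'spikes_detected': 2}
```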

A few companies are testing BCIs. What’s different about Neuralink?


Noninvasive devices positioned on the outside of a person’s head have been used in clinical trials for a long time, but they have not received approval from the Food and Drug Administration for commercial development.
There are other brain-computer devices, like Neuralink’s, that are fully implanted and wireless. However, the N1 implant combines more technologies in a single device: It can target individual neurons, record from thousands of sites in the brain and recharge its small battery wirelessly. These are important advances that could produce better outcomes.

Why is Neuralink drawing criticism?

  • Musk announced the company’s first human trial on his social media platform, X – formerly Twitter – in January 2024.
  • Neuralink did not register the trial at ClinicalTrials.gov, as is customary and required by some academic journals.
  • Neuralink, on the other hand, embodies a private equity model, which is becoming more common in science.
  • However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.
  • In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

What other ethical issues does Neuralink’s trial raise?

  • In particular, BCI technology helps people recover a sense of their own agency or autonomy – one of the key tenets of medical ethics.
  • With BCIs, scientists and ethicists are particularly concerned about the potential for identity theft, password hacking and blackmail.
  • Given how the devices access users’ thoughts, there is also the possibility that their autonomy could be manipulated by third parties.

What’s next?

  • Musk has said his ultimate goal for BCIs, however, is to help humanity – including healthy people – “keep pace” with artificial intelligence.
  • Some types of supercharged brain-computer synthesis could exacerbate social inequalities if only wealthy citizens have access to enhancements.
  • For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating.


The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

SEALSQ Unveils AIoT Strategy with WISeAI.IO at Davos AI Roundtable

Retrieved on: 
Thursday, January 18, 2024

Davos, Switzerland, Jan. 18, 2024 (GLOBE NEWSWIRE) -- SEALSQ Corp ("SEALSQ" or "Company") (NASDAQ: LAES), a company that focuses on developing and selling Semiconductors, PKI and Post-Quantum technology hardware and software products, today announced its innovative AI strategy during the Artificial Intelligence (AI) Roundtable at Davos.

Key Points: 
  • The AIoT system functions as the central brain of the SEALSQ Ecosystem, which currently consists of over 1.6 billion semiconductor-powered devices.
  • In conjunction with its AI strategy announcement, SEALSQ hosted a successful roundtable discussion titled "Decentralization: AI Unleashed: Ensuring Safety and Leveraging Decentralization" at the Hotel Europe in Davos, featuring a diverse panel of experts, including AI researchers, ethicists, technology professionals, and policymakers.
  • Establishment of Ethical AI Guidelines: The roundtable emphasized the need for universal ethical standards in AI development and deployment.

WISeKey and Cybersecurity Tech Accord to Host an Influential Roundtable Discussion During Davos 2024 Event

Retrieved on: 
Friday, January 5, 2024

The roundtable, themed “AI Unleashed: Ensuring Safety and Leveraging Decentralization,” is a landmark gathering that will dissect the burgeoning role of cybersecurity within Artificial Intelligence.

Key Points: 
  • It will aim to navigate the complex terrain of AI, probing the ethical, safety, and privacy challenges that accompany AI's integration into various sectors.
  • Notably, the event features a diverse panel of leading minds in AI, encompassing AI researchers, ethicists, technology mavens, and influential policymakers.
  • WISeKey cordially invites media representatives, industry experts, and all stakeholders interested in the intersection of AI, cybersecurity, and ethics to join this groundbreaking event.

AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024

Retrieved on: 
Wednesday, January 3, 2024

The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination.

Key Points: 
  • It also saw boardroom drama in an AI startup dominate the news cycle for several days.
  • We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
  • Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder: 2023 was the year of AI hype.
  • One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education.
  • The more people understand how AI works, the more empowered they are to use it and to critique it.
  • I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists.


Anjana Susarla receives funding from the National Institutes of Health and the Omura-Saxena Professorship in Responsible AI. Casey Fiesler receives funding from the National Science Foundation, and is currently a funded Visiting Fellow with the Notre Dame-IBM Tech Ethics Lab. Kentaro Toyama's research is funded in part by the National Science Foundation.

Global Policy Advisors to Host Session on Corporate Governance Lessons from OpenAI

Retrieved on: 
Wednesday, November 22, 2023

NEW YORK, Nov. 21, 2023 (GLOBE NEWSWIRE) -- Global Policy Advisors has announced a seminar titled “Navigating Corporate Governance: Insights from OpenAI's Journey.” The online event, scheduled for 2 p.m. EST on November 30, is designed to shed light on the complex emerging corporate governance issues, using OpenAI's experiences as a primary case study.

Key Points: 
  • The interactive seminar will be hosted by Salar Ghahramani , the founder of Global Policy Advisors and a corporate governance expert.
  • The session will address corporate governance best practices, managing conflicts of interest in rapidly evolving industries, and the multifaceted fiduciary roles that are integral to ensuring ethical, transparent, and effective decision-making across various business sectors.
  • This session is particularly beneficial for business leaders, board members, and corporate governance professionals seeking to navigate the complex landscape of modern corporate governance.

Operation HOPE’s John Hope Bryant and OpenAI’s Sam Altman Announce Formation of First-of-Its-Kind AI Ethics Council at 2023 Annual Meeting of the Hope Global Forums in Atlanta

Retrieved on: 
Tuesday, December 12, 2023

On stage yesterday afternoon at the 2023 Annual Meeting of the Hope Global Forums in Atlanta, GA, Operation HOPE’s John Hope Bryant and OpenAI’s Sam Altman announced the formation of a first-of-its-kind Council on Artificial Intelligence powered by Operation HOPE.

Key Points: 
  • Ambassador Andrew Young, Bernice King, Dr. Helene Gayle (President of Spelman College), and Dr. George French (President of Clark Atlanta University) have already committed to join the council, with more to be announced soon.
  • OpenAI has also committed its first-ever grant to a community-based organization: $500,000 to help further build out Operation HOPE’s financial coaching and support services.
  • A full replay of the conversation between Mr. Bryant and Mr. Altman at the Hope Global Forums can be viewed here.

Coresight Research Announces NextGen Commerce Conference Series

Retrieved on: 
Friday, December 1, 2023

Coresight Research is hosting a groundbreaking NextGen Commerce Conference on December 5, 2023, offering a unique platform for an in-depth exploration of the future of AI in the retail sector.

Key Points: 
  • “AI is one of, if not the most, disruptive technologies to be introduced since the internet was created,” said Coresight Research CEO Deborah Weinswig.
  • The “Next Gen Commerce: Revolutionizing Retail With Artificial Intelligence” conference is a closed-door event.
  • For more information about sponsorship opportunities or to register to attend the “Next Gen Commerce: Revolutionizing Retail With Artificial Intelligence” conference, visit: https://coresight.com/events/next-gen-commerce-revolutionizing-retail-wi... .

QATAR FOUNDATION'S WISE SEEKS TO REVOLUTIONIZE EDUCATION IN THE AI AGE

Retrieved on: 
Tuesday, October 10, 2023

DOHA, Qatar, Oct. 10, 2023 /PRNewswire/ -- Recognizing the transformative power of AI and how it is becoming an integral part of our daily lives, the WISE Summit 2023 will bring together educators from diverse backgrounds, including tech innovators, AI ethicists, policymakers, and students, to create dialogues on how AI will shape the future of education.

Key Points: 
  • Themed "Creative Fluency: Human Flourishing in the Age of AI," the 11th edition of the summit held by WISE – Qatar Foundation's (QF) global platform for innovation in education – will take place from 28-29 November, 2023.
  • Stavros N. Yiannouka, CEO of WISE, said: "At a time when AI is at the forefront of the global conversation, it is imperative that we consider its role in the future of education.
  • WISE 11 provides a platform for policymakers, educators and innovators to collaboratively envision how AI can enhance the learning experience and empower individuals.
  • WISE 11 aspires to provide inspiration and thought leadership, providing a roadmap for a future where teachers and learners can leverage AI to create transformative learning experiences."