AI

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

Retrieved on: 
Thursday, April 25, 2024

Like the AI image of the pope in a puffer jacket that went viral in May 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms.

Key Points: 
  • Like the AI image of the pope in a puffer jacket that went viral in May 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms.
  • Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.
  • Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

  • They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.
  • On social media, profiteers have used clickbait articles to drive users to ad-laden websites.
  • This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.
  • But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

  • Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday.
  • Facebook users often replied to AI-generated images with comments of encouragement and congratulations

Algorithms push AI-generated content

  • We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.
  • It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023.
  • Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

  • Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.
  • Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice.
  • The company has released several announcements about how it plans to deal with AI-generated content.
  • In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.


The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

AI-powered ‘deep medicine’ could transform healthcare in the NHS and reconnect staff with their patients

Retrieved on: 
Thursday, April 25, 2024

He outlines what he calls the deep medicine framework as a comprehensive strategy for the incorporation of AI into different aspects of healthcare.

Key Points: 
  • He outlines what he calls the deep medicine framework as a comprehensive strategy for the incorporation of AI into different aspects of healthcare.
  • The framework of deep medicine is built upon three core pillars: deep phenotyping, deep learning and deep empathy.
  • These pillars are all interconnected and adopting this framework could enhance patient care, support healthcare staff and strengthen the entire NHS system.

Deep phenotyping

  • Deep phenotyping refers to a comprehensive picture of an individual’s health data, across a full lifetime.
  • A deep phenotype goes far beyond the limited data collected during a standard medical appointment or health episode.

Deep learning

  • This is where deep learning – an area of AI that seeks to simulate the decision-making power of the human brain – is so valuable.
  • Deep learning uses an algorithm called a neural network: a web of small mathematical units, called “neurons”, that are connected to one another to share and learn information.
  • Advances in algorithms, computing technology and the availability of digital data have enabled neural networks to demonstrate impressive performance.
  • For instance, they have enabled the rapid and accurate analysis of medical images, such as X-rays and MRIs.
  • In addition, AI technology like that behind ChatGPT can process medical literature and patient records to help make complex diagnoses.
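The “neurons” described above can be illustrated with a toy example (not from the article; every name and value here is illustrative): a single artificial neuron, trained by gradient descent, that learns to map an input of 0 to an output near 0 and an input of 1 to an output near 1.

```python
# A deliberately minimal sketch of one artificial "neuron":
# weighted input -> sigmoid activation -> output, with the weight
# and bias adjusted by gradient descent on a squared-error loss.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: (input, target) pairs.
data = [(0.0, 0.0), (1.0, 1.0)]

weight, bias = 0.5, 0.0   # arbitrary starting parameters
rate = 1.0                # learning rate

for _ in range(5000):
    for x, target in data:
        out = sigmoid(weight * x + bias)
        error = out - target
        # Gradient of the squared error w.r.t. the pre-activation.
        grad = error * out * (1.0 - out)
        weight -= rate * grad * x
        bias -= rate * grad

print(sigmoid(bias))             # near 0: the neuron learned input 0 -> 0
print(sigmoid(weight + bias))    # near 1: the neuron learned input 1 -> 1
```

Deep-learning systems like the ones used for medical imaging stack millions of such units in layers, but the learning mechanism is the same gradient-based adjustment shown here.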

Deep empathy

  • This is the third pillar of deep medicine: deep empathy.
  • Healthcare has increasingly become a discipline where the human touch, once its cornerstone, is overshadowed by a relentless pursuit of efficiency.
  • AI solutions can be designed to reduce the administrative burdens for staff, opening up more opportunities for meaningful patient interaction.


Will Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How Israel continues to censor journalists covering the war in Gaza

Retrieved on: 
Thursday, April 25, 2024

“That is the hallmark of a dictatorship, not a democracy.” As well as restrictions on media access to Gaza, particular broadcasters face further restrictions.

Key Points: 
  • “That is the hallmark of a dictatorship, not a democracy.” As well as restrictions on media access to Gaza, particular broadcasters face further restrictions.
  • At the start of April, Israeli prime minister Benjamin Netanyahu proclaimed he would “act immediately to stop” Qatar-based broadcaster Al Jazeera’s operations inside Israel.
  • Israel’s parliament passed a bill allowing it to close Al Jazeera’s office in Israel, block its website and ban local channels from using its coverage.
  • The CPJ said on April 20 that at least 97 journalists and media workers were among the more than 34,000 people killed since the war began.

Access to Gaza

  • However, journalists’ organisations and the correspondents themselves have been lobbying for access to Gaza for months now.
  • The BBC’s international editor Jeremy Bowen, also speaking in Perugia, confirmed that it had been a really difficult story to cover, principally, “because the main meat of it – which is what’s happening in Gaza, we can’t get close to”.
  • This has given journalists access to the West Bank and enabled coverage of settler violence against the local Palestinian population, but not to Gaza.
  • CNN’s Clarissa Ward was the first foreign journalist to make it into Gaza without the army, which she did by accompanying an aid convoy supported by the United Arab Emirates in December 2023.

Israeli media coverage

  • Within Israel, the media are mostly publishing the IDF version of events unchallenged.
  • According to Israeli journalist and activist Anat Saragusti: “Hebrew-speaking Israelis watching television news are not exposed at all to what’s going on in Gaza.”
  • In the same article, cultural commentator and academic David Gurevitz claimed the number of Palestinians killed remains an abstract concept for many Israelis: “The Israeli audience isn’t capable of accommodating two kinds of pain together, seeing and identifying with the human victim of the other side as such, and the media follow suit.” This argument was backed up this month by Israeli journalist Yossi Klein, who wrote: “The most taboo number in Israel is 34,000.”


Professor Colleen Murrell receives funding from Ireland's regulator Coimisiún na Meán to research and write the annual Reuters Digital News Report Ireland.

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

Retrieved on: 
Thursday, April 25, 2024

FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance

Key Points: 
  • FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance
    FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws.
  • Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings.
  • With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.
  • Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.

The use of AI in war games could change military strategy

Retrieved on: 
Tuesday, April 23, 2024

The rise of commercially viable generative artificial intelligence (AI) has the potential to transform a vast range of sectors.

Key Points: 
  • The rise of commercially viable generative artificial intelligence (AI) has the potential to transform a vast range of sectors.
  • Generative AI will fundamentally reshape war gaming — analytical games that simulate aspects of warfare at tactical, operational or strategic levels — by allowing senior military and political leaders to pursue better tactical solutions to unexpected crises, solve more complex logistical and operational challenges and deepen their strategic thinking.

The art of war gaming

  • From its inception, war gaming has been intended to offer realistic training to commanders that could otherwise only be gained through real-world experience.
  • War gaming also offers a way to test operational plans, allowing leaders to gain experience planning large-scale operations and work through complex logistical challenges.
  • Lastly, war games provide the foundation for a common strategic culture within a country’s military and national security institutions.

Generative AI

  • As with other strategy games like chess, Risk and Go, generative AI will be capable of challenging commanders’ handling of battlefield tactics.
  • Advances in AI could allow military leaders to gain additional competencies in handling sophisticated military AI and receive tactical advice from a broader range of perspectives.
  • Lastly, generative AI will allow war games to incorporate more strategy, providing invaluable insights and experience to both military and political leaders.

Preparing for uncertainty

  • AI’s capacity to introduce new developments into game play, including through its faulty assumptions, will force commanders to prepare for uncertainty and the “fog of war,” an increasingly necessary skill in the complex environment of contemporary combat.

Military science revolution

  • The rise of generative AI and its contribution to war gaming will likely prompt yet another revolution in the field of military science.
  • These games will improve the realism of training exercises and prepare leaders for the future of conflict, solve complex logistical challenges and spark new innovations in overarching military strategy.


John Long Burnham does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Christine Lagarde: Unlocking the power of ideas

Retrieved on: 
Tuesday, April 23, 2024

Since 2022 rising housing costs have, on average, largely been offset by growth in household income, leading to stable housing cost to household income ratios.

Key Points: 
  • Since 2022 rising housing costs have, on average, largely been offset by growth in household income, leading to stable housing cost to household income ratios.
  • The housing cost burden has, however, increased slightly for both renter and mortgage households at the upper end of the income distribution.

China’s Interim Measures for the Management of Generative AI Services: A Comparison Between the Final and Draft Versions of the Text

Retrieved on: 
Tuesday, April 23, 2024

Authors: Yirong Sun and Jingxian Zeng. Edited by Josh Lee Kok Thong (FPF) and Sakshi Shivhare (FPF). The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech [?]


Are tomorrow’s engineers ready to face AI’s ethical challenges?

Retrieved on: 
Friday, April 19, 2024

A test version of a Roomba vacuum collects images of users in private situations.

Key Points: 
  • A test version of a Roomba vacuum collects images of users in private situations.
  • The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner.
  • What’s more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

  • We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.
  • When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.
  • One student asked: “Do YOU know who I’m supposed to go to?” Another was troubled by the lack of training: “I [would be] dealing with that with no experience.”


Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off’

  • A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented.
  • Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.
  • “[Misusage] issues are not their concern.” One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges.
  • Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

  • When engineers do receive ethics training as part of their degree, it seems to work.
  • Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers.
  • Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work.
  • Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.


Elana Goldenkoff receives funding from National Science Foundation and Schmidt Futures. Erin A. Cech receives funding from the National Science Foundation.

TikTok fears point to larger problem: Poor media literacy in the social media age

Retrieved on: 
Friday, April 19, 2024

The U.S. government moved closer to banning the video social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr.

Key Points: 
  • The U.S. government moved closer to banning the video social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr.
  • The move could improve the bill’s chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.
  • The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country.
  • For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law.
  • The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.
  • Here’s why I think the recent move against TikTok misses the larger point: Americans’ sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

  • But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.
  • The deeper problem is not that the Chinese government can easily manipulate content on the app.
  • It is, rather, that people think it is OK to get their news from social media in the first place.
  • In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Media and technology literacy

  • Research suggests that the problem will only be alleviated by inculcating media and technology literacy habits from an early age.
  • My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor’s Youth Council.
  • Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies.


The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to Hour25AI, a startup dedicated to reducing digital distractions.

Approaches to Address AI-enabled Voice Cloning

Retrieved on: 
Friday, April 19, 2024

Today, the FTC announced four winners of the Voice Cloning Challenge, which was launched to address the present and emerging harms of artificial intelligence (AI)-enabled voice cloning technologies.

Key Points: 


  • Today, the FTC announced four winners of the Voice Cloning Challenge, which was launched to address the present and emerging harms of artificial intelligence (AI)-enabled voice cloning technologies.
  • The FTC received submissions from a wide range of individuals, teams, and organizations.

Leveraging solutions to provide upstream prevention or authentication

  • Prevention or authentication refers to techniques that limit the application and misuse of voice cloning software by unauthorized users.
  • One commonly discussed approach to prevention and authentication is watermarking, which often refers to a “broad array of techniques” for embedding an identifying mark into a piece of media so that its origin can be tracked, helping to prevent the misuse of cloned audio clips.[2]
  • Invisible or visible watermarks can be altered or removed, potentially rendering them unhelpful for differentiating between real and synthetic content.
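As a concrete (and deliberately naive) illustration of the watermarking idea, the sketch below hides a bit pattern in the least significant bits of integer audio samples. All names and values are hypothetical, and real watermarking schemes are far more robust; the toy also demonstrates the weakness noted above, since altering the samples destroys the mark.

```python
# Toy LSB (least-significant-bit) audio watermark: overwrite the lowest
# bit of each sample with bits of an identifying tag, then read it back.
MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit origin tag

def embed(samples, mark=MARK):
    """Overwrite the LSB of each sample with the next watermark bit."""
    return [(s & ~1) | mark[i % len(mark)] for i, s in enumerate(samples)]

def extract(samples, length=len(MARK)):
    """Read the watermark back from the first `length` sample LSBs."""
    return [s & 1 for s in samples[:length]]

audio = [1000, 1001, 998, 1003, 995, 1002, 999, 1004]  # fake PCM samples
marked = embed(audio)
assert extract(marked) == MARK     # the watermark survives a plain copy

# An attacker "alters" the clip (here: halving the amplitude),
# which scrambles the LSBs and removes the mark.
attacked = [s // 2 for s in marked]
print(extract(attacked) == MARK)   # False: the watermark is gone
```

The fragility shown in the last step is exactly why the text cautions that watermarks can be “altered or removed”: any re-encoding, resampling or amplitude change discards the low-order bits that carry a naive mark.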

Applying solutions to detect voice cloning in real-time


Real-time detection or monitoring includes methods to detect cloned voices or the use of voice cloning technology at the time during which a specific event occurs. Studies reveal a spectrum of efficacy for voice cloning detection solutions.[5] The effectiveness of such solutions is especially important when considering the types of AI-enabled voice cloning scams – such as fraudulent extortion scams – that the technology can enable.

Using solutions to evaluate existing content

  • The post-use evaluation of existing content includes methods to check if already-created audio clips such as voice mail messages and audio direct messages contain cloned voices.
  • One potential way to evaluate existing audio clips is to develop algorithms that detect inconsistencies in voice cloned clips.

Looking forward: Preventing and deterring AI-enabled voice cloning scams and fraud

  • While there are many exciting ideas with great potential, there’s still no silver bullet to prevent the harms posed by voice cloning.
  • Further, voice service providers – telephone and VoIP companies – need to continue making progress against illegal calls.
  • In addition, the Commission has recently enacted a new Impersonation Rule, which will give the agency additional tools to deter and halt deceptive voice cloning practices.
  • There is no AI exemption from the laws on the books, and the FTC remains committed to protecting consumers from the misuse of AI-enabled voice cloning technologies.


[2]https://arxiv.org/pdf/2306.01953.pdf
[3]https://www.technologyreview.com/2023/08/09/1077516/watermarking-ai-trus...
[4]https://www.banking.senate.gov/imo/media/doc/voice_cloning_financial_sca...
[5]https://arxiv.org/pdf/2307.07683.pdf; https://arxiv.org/pdf/2005.13770.pdf
[6]https://arxiv.org/pdf/2308.14970.pdf
[7]https://arxiv.org/pdf/2402.18085v1.pdf
[8]https://arxiv.org/pdf/2307.07683.pdf; https://arxiv.org/pdf/2402.18085v1.pdf