
Press Replay on Ethics: How AI Debate Panels Surface Hidden Value-Trade-Offs

TL;DR: High-stakes policy decisions often involve conflicts between values, like fairness versus efficiency, or individual rights versus the common good. The various committees (like hospital ethics boards or policy advisory groups) tasked with resolving these conflicts often work in ways that are hard to scrutinize, their conclusions shaped by the specific people in the room…

Dire Wolves and Deep Prompts: Language Models in Applied Ethics

You might have seen the headlines: Colossal Biosciences claims to have brought back the dire wolf. Except, it’s not quite a direct resurrection. What Colossal actually created are genetically engineered proxies: grey wolves modified to have some dire wolf traits. I wondered if the news might renew interest in the ethics of “de-extinction” and perhaps…

Friend AI: Personal Enhancement or Uninvited Company?

Written by Christopher Register. You can now pre-order a friend—or, a Friend, which is designed to be an AI friend. The small, round device contains AI-powered software and a microphone, and it’s designed to be worn on a lanyard around the neck at virtually any time. The austere product website says of Friend that, “When…

Caution With Chatbots? Generative AI in Healthcare


Written by MSt in Practical Ethics student Dr Jeremy Gauntlett-Gilbert. Human beings, as a species, love to tell stories and to imagine that there are person-like agents behind events. The Ancient Greeks saw the rivers and the winds as personified deities, placating them if they appeared ‘angry’. Psychologists in classic 1940s experiments were impressed at…

Moral AI And How We Get There with Prof Walter Sinnott-Armstrong


Can we build and use AI ethically? Walter Sinnott-Armstrong discusses how this can be achieved in his new book ‘Moral AI and How We Get There’, co-authored with Jana Schaich Borg and Vincent Conitzer. Edmond Awad talks through the ethical implications of AI use with Walter in this short video. With thanks to the Atlantic…

Would You Survive Brain Twinning?

Imagine the following case: A few years in the future, neuroscience has advanced to the point where it can artificially support conscious activity that is just like the conscious activity in a human brain. After being diagnosed with an untreatable illness, a patient, C, has transferred (uploaded) his consciousness to the artificial substrate…

(Bio)technologies, human identity, and the Medical Humanities

Introducing two journal special issues and a conference. Written by Alberto Giubilini. Two special issues of the journals Bioethics and Monash Bioethics Review will be devoted to, respectively, “New (Bio)technology and Human Identity” and “Medical Humanities in the 21st Century” (academic readers, please consider submitting an article). Here I would like to briefly explain why…

Cross Post: What’s wrong with lying to a chatbot?

Written by Dominic Wilkinson, Consultant Neonatologist and Professor of Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Imagine that you are on the waiting list for a non-urgent operation. You were seen in the clinic some months ago, but still don’t have a date for the procedure. It is extremely frustrating, but it seems that you will just have to wait.

However, the hospital surgical team has just got in contact via a chatbot. The chatbot asks some screening questions about whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working, or doing your everyday activities.

Your symptoms are much the same, but part of you wonders if you should answer yes. After all, perhaps that will get you bumped up the list, or at least able to speak to someone. And anyway, it’s not as if this is a real person.

Political Campaigning, Microtargeting, and the Right to Information

Written by Cristina Voinea

2024 is poised to be a challenging year, partly because of the important elections looming on the horizon – from the United States and various European countries to Russia (though, let us admit, surprises there might be few). As more than half of the global population is on social media, much of political communication and campaigning has moved online. Enter the realm of online political microtargeting, a practice fueled by innovations in data and analytics that has changed the face of political campaigning.

Microtargeting, a form of online targeted advertising, relies on the collection, aggregation, and processing of both online and offline personal data to target individuals with the messages they are most likely to respond or react to. In political campaigns, microtargeting on social media platforms is used to deliver personalized political ads, attuned to the interests, beliefs, and concerns of potential voters. The objectives of political microtargeting are diverse: it can be used to inform and mobilize, or to confuse, scare, and demobilize. How does political microtargeting change the landscape of political campaigns? I argue that this practice is detrimental to democratic processes because it restricts voters’ right to information. (Privacy infringements are an additional reason, but they will not be the focus of this post.)
