
WiP - Jasmine Gunkel / AI, pseudointimacy, and why intimacy matters



Philosophy
Wednesday, September 4, 2024, 1:00 pm - 2:00 pm
Skinner Building, Room 1116

Jasmine Gunkel, a postdoctoral fellow at the National Institutes of Health and soon to be Assistant Professor of Philosophy at the University of Western Ontario, presents recent work on "AI, pseudointimacy, and why intimacy matters" at our Works in Progress meeting. The abstract appears below.


The U.S. Surgeon General has declared that we are in an “epidemic of loneliness and isolation.” A rising number of people are turning to AI as a salve. Though I take it to be obvious that chatbots are an insufficient solution to the loneliness epidemic, I think it’s less obvious why this is. This talk centers on three sets of questions. 1) Is it possible to be intimate with AI, or is it mere ‘pseudointimacy’? What, if anything, turns on this distinction? 2) Is designing AI systems that seem able to be intimate a good thing? 3) How ought we to design and regulate AI in light of the answers to 1 and 2? In exploring these questions, I (loftily) also hope to shed some new light on the value of intimacy and human connection.

 
