2025.04.01 – Presentation by 周巧其 – Voices of Climate Change

Voices of Climate Change

Each soul knows the infinite—knows all—but confusedly. It is like walking on the seashore and hearing the great noise of the sea. —G. W. Leibniz (1714)

The authors, an artist and a geophysicist, present three different approaches to art-science projects, depicting hybrid models of interdisciplinarity, particularly via sound art. They first collaborate on an art installation using sonified seismic data collected in Antarctica, then turn to vibrational data from a seismic deployment in the Jornada desert, New Mexico, envisioning a site-specific listening approach that would effectively merge art and science. The authors propose new models of collaboration in ever-more-urgent global responses to the climate crisis, revealing what we might call the “voices” of climate change.
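A common way to make seismic data audible is audification: the seismometer trace is treated as an audio waveform and sped up so that its sub-audible frequencies shift into the hearing range. The NumPy/SciPy sketch below illustrates only this generic idea; the input file, sample rate, and speed-up factor are assumptions, not the authors' actual processing chain.

    # Minimal audification sketch: speed up a seismic trace so that its
    # sub-audible energy lands in the hearing range. Inputs are assumed.
    import numpy as np
    from scipy.io import wavfile

    fs_seis = 100.0                            # seismometer sample rate in Hz (assumed)
    speedup = 400                              # playback speed-up factor (assumed)
    trace = np.load("antarctica_trace.npy")    # hypothetical 1-D seismic trace

    # Normalise to [-1, 1] and write the samples back out at an elevated rate:
    # playing 100 Hz samples at 40 kHz compresses hours into minutes and shifts
    # a 0.05 Hz microseism up to 20 Hz, just inside the audible band.
    audio = trace / np.max(np.abs(trace))
    wavfile.write("antarctica_audified.wav", int(fs_seis * speedup), audio.astype(np.float32))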

PDF for presentation


2025.04.01 – Presentation by 孫以臻 – The Artistic Status of Bio-art

The Artistic Status of Bio-art (download)

Rupkatha Journal on Interdisciplinary Studies in Humanities (ISSN 0975-2935)
Vol. 13, No. 1, January–March 2021, pp. 1–13

Author: Eleni Gemtou
National and Kapodistrian University of Athens, Greece.

Abstract

This paper aims to define Bio-art and to strengthen its artistic status through two distinct approaches. The first is based on the acceptance that the concept of Bio-art includes both the term “art” and the term “bio”, which could stand for Biology, Biotechnology, and Bioethics. It is argued that despite its direct connection to scientific research, Bio-art is only partly linked to the methods of the pure science of Biology, standing closer to the technoscience of Biotechnology. However, while bio-artists often use scientific methods and techniques, they ultimately focus on bioethical questions. To amplify the artistic status of bio-artworks, we claim that they are kinds of visual “enthymemes”, a term used by Aristotle for incomplete rhetorical syllogisms that engage all recipients through common questions. Our second approach is developed around Levinson’s intentional-historical theory, showing that Bio-art belongs to the evolutionary narrative of art and artistic intentions. We identify interconnections between distinct features of bio-artworks and artworks of different eras that, viewed retrospectively, can be understood as having paved the way for the emergence of Bio-art.

Key words: Bio-art, Biotechnology, Bioethics, Metaphor-Enthymeme, Levinson’s intentional-historical theory

PDF for presentation

 


2025.04.01 – Presentation by 葉卯陽 – When He Feels Cold, He Goes to the Seahorse

Article Title

“When He Feels Cold, He Goes to the Seahorse”
— Blending Generative AI into Multimaterial Storymaking for Family Expressive Arts Therapy

CHI ’24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems

Introduction

This study pioneers the use of generative AI (Midjourney) in family expressive arts therapy, showing how AI-generated visuals—combined with clay, drawing, and storytelling—help children and parents communicate emotions. Through five weeks of sessions with seven families, the researchers found that AI acts as both a creative amplifier (e.g., turning clay figures into symbolic characters) and a therapeutic bridge, revealing family dynamics through metaphors such as a child’s “seahorse = comfort.”

Methodology

This study employed a five-week co-design process with seven families (18 participants), guided by a therapist. Families used traditional materials (clay, markers) and generative AI (Midjourney) to co-create stories.

Key findings

  • Children projected emotions via AI-generated characters (e.g., a dinosaur as an obstacle).
  • Role-playing with tokens fostered perspective-taking.
  • AI’s Role: Empowerment (lowered creative thresholds), connection (physical-digital fusion).

Conclusion

  • AI + traditional materials enhanced therapeutic storymaking by fostering creativity, connection, and reflection.
  • AI’s unpredictability turned into creative opportunities (e.g., “diamond banana”).
  • Generative AI isn’t just a tool—it’s a co-creator in family healing.

URL

https://doi.org/10.1145/3613904.3642852


2025.04.01 – Presentation by 吳柏瑤 – From I-Ching to AI: Interrogating Digital Divination (ISEA 2024)

From I-Ching to AI: Interrogating Digital Divination

Presentation PDF

[Image: A Black female scientist sits at an algorithmic divination machine full of buttons, dials, and wires]

Conference: 29th International Symposium on Electronic Art (ISEA)

Author: Hugh Davies (RMIT University)

Keywords: Games, Media Art, Artificial Intelligence

Abstract

Divination denotes practices of mediation that aim to reveal hidden knowledge and sketch out speculative futures before they come into being. Often employing creative and playful methods, divinatory speculations wield ominous power, even when inaccurate. Today, this power is becoming concentrated within neoliberal coordinates following the professionalization of divination, most markedly through artificial intelligence (AI). Reviewing the literature of past and present divinatory practices to interrogate its methods from games to AI, this paper offers four key contributions: (1) it establishes divination as a media arts practice; (2) it traces transnational histories of this practice; (3) it unpacks the limitations and issues arising from AI divination, and (4) it presents strategies and tactics to confront them. Mapping the shifting power-relations and speculative practices of prediction, this paper reveals and critiques the unannounced spiritual mysticism surrounding contemporary AI and its increasing embrace within late-capitalist future forecasting.


2025.03.18 – Presentation by 周巧其 – Emerging Technologies Reconfiguring Ecological Vision (新興技術重構生態視野)

Emerging Technologies Reconfiguring Ecological Vision (新興技術重構生態視野)
Presentation PDF

REFERENCE

Chang, M., Shen, C., Maheshwari, A., Danielescu, A., & Yao, L. (2022, June). Patterns and opportunities for the design of human-plant interaction. In Proceedings of the 2022 ACM Designing Interactive Systems Conference (pp. 925-948).

Hu, Y., Chou, C., & Kakehi, Y. (2023). Synplant: Cymatics Visualization of Plant-Environment Interaction Based on Plants Biosignals. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 6(2), 1-7.

Hu, Y. Y., Chou, C. C., & Li, C. W. (2021, October). Apercevoir: Bio internet of things interactive system. In Proceedings of the 29th ACM International Conference on Multimedia (pp. 1456-1458).

Hu, Y., Fol, C. R., Chou, C., Griess, V. C., & Kakehi, Y. (2024, May). Immersive Flora: Re-Engaging with the Forest through the Visualisation of Plant-Environment Interactions in Virtual Reality. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-6).



2025.03.18 – Presentation by 葉卯陽 – The Malleable-Self Experience

The Malleable-Self Experience: Transforming Body Image by Integrating Visual and Whole-body Haptic Stimuli

Audience Award, ACM SIGGRAPH 2024 Emerging Technologies

ABSTRACT

The Malleable-Self Experience comprises the integration of the visual element of virtual reality (VR) with the whole-body haptic sensations of the Synesthesia X1 haptic chair. The goal is to induce a provocative experience that expands one’s understanding of the self by creating a malleable perception of the body image. We explore the effects of visual and whole-body haptic integration on augmenting body image during dynamic transformations of visual representations of the body in VR. We design the plausibility of these perceptual augmentations using a specific sequence of multisensory events: (1) establishing body ownership of a virtual body anchored in the same self-located space as the participant, (2) separating the virtual body to hover above the participant’s physical body, enhanced by accompanying haptic stimuli to increase proprioceptive uncertainty, and (3) transforming the virtual body with integrated visuo-haptic stimuli to sustain perceptual congruency.
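Read as a protocol, that sequence pairs a visual state with a haptic cue at each step. The sketch below is only an illustrative encoding of the three phases named above; the field names and wording are hypothetical, not the authors' system.

    # Hypothetical encoding of the three-phase visuo-haptic sequence described
    # in the abstract; all values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Phase:
        name: str
        visual: str    # what the VR display presents
        haptic: str    # what the whole-body haptic chair renders

    SEQUENCE = [
        Phase("ownership", "virtual body co-located with the participant",
              "baseline full-body stimulation to anchor body ownership"),
        Phase("separation", "virtual body hovers above the physical body",
              "diffuse cues that raise proprioceptive uncertainty"),
        Phase("transformation", "virtual body is dynamically transformed",
              "visuo-haptic stimuli kept congruent with the transformation"),
    ]

    for phase in SEQUENCE:
        print(f"{phase.name}: {phase.visual} / {phase.haptic}")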

KEYWORDS

Malleable-Self Experience, XR (Extended Reality), Synesthesia, Body Ownership Illusions (BOI)

AUTHOR

  • Tanner Person, Keio University Graduate School of Media Design
  • Nobuhisa Hanamitsu, Enhance Experience Inc. / Keio University Graduate School of Media Design
  • Danny Hynds, Keio University Graduate School of Media Design
  • Sohei Wakisaka, Keio University Graduate School of Media Design
  • Kota Isobe, Enhance Experience Inc.
  • Leonard Mochizuki, Enhance Experience Inc.
  • Tetsuya Mizuguchi, Enhance Experience Inc. / Keio University Graduate School of Media Design
  • Kouta Minamizawa, Professor, Keio University Graduate School of Media Design

REFERENCE

ACM SIGGRAPH 2024 Emerging Technologies

Keio Media Design (KMD)

Enhance Experience Inc.

Synesthesia lab

https://synesthesialab.com/

Presentation file.


2025.03.18 – Presentation by 孫以臻 – Material Texture Design

Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation

SIGGRAPH 2023 Emerging Technologies

ABSTRACT
We propose Material Texture Design, a material texture representation system. This system presents a pseudo-attraction force sensation in response to the user’s motion and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels the artificial material texture. Experimental results showed that the perceived texture could be changed by adjusting the frequency. Through demonstration, users can distinguish different textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple system for texture representation.
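Pseudo-attraction force sensations of this kind are usually produced with asymmetric vibration: each cycle accelerates sharply in one direction and returns slowly, so the signal averages to zero while the perceived pull does not. The sketch below only generates such a waveform for a chosen frequency (the parameter the abstract says changes the perceived texture); the actual waveform, frequencies, and actuator interface of the system are not specified here, so every value is an assumption.

    # Toy asymmetric-vibration waveform: fast rise, slow fall, zero mean.
    # Frequency is the knob said to change the perceived material texture.
    import numpy as np

    def asymmetric_burst(freq_hz, duration_s=0.5, sr=8000, asymmetry=0.2):
        """One burst: each period spends `asymmetry` of its time rising steeply
        and the remainder falling gently back to the starting level."""
        t = np.arange(int(duration_s * sr)) / sr
        phase = (t * freq_hz) % 1.0       # position within the current period
        rise = phase < asymmetry
        return np.where(rise,
                        -1.0 + 2.0 * phase / asymmetry,                        # fast ramp up
                        1.0 - 2.0 * (phase - asymmetry) / (1.0 - asymmetry))   # slow ramp down

    soft_like = asymmetric_burst(freq_hz=40)    # hypothetical "water/jelly" setting
    hard_like = asymmetric_burst(freq_hz=150)   # hypothetical "rubber ball" setting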

KEYWORDS
texture design, haptic display, elastic interface

REFERENCE
https://www.youtube.com/watch?v=KqxmShoDhjI
https://dl.acm.org/doi/epdf/10.1145/3588037

PDF for presentation


2025.03.18 – Presentation by 吳柏瑤 – Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World (EAI ArtsIT 2024)

Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World

Presentation PDF


Abstract

In this paper, we create “Love in Action” (LIA), a body-language-based social game utilizing video cameras installed in public spaces to enhance social relationships in the real world. In the game, participants assume dual roles: requesters, who issue social requests, and performers, who respond to social requests by performing the specified body language. To mediate the communication between participants, we build an AI-enhanced video analysis system incorporating multiple visual analysis modules, such as person detection, attribute recognition, and action recognition, to assess the performer’s body language quality. A two-week field study involving 27 participants shows significant improvements in their social friendships, as indicated by self-reported questionnaires. Moreover, user experiences are investigated to highlight the potential of public video cameras as a novel communication medium for socializing in public spaces.
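The abstract describes the analysis system as a chain of visual modules that feed a body-language quality score. The sketch below is only a schematic reading of that chain; the module interfaces, names, and scoring rule are placeholders, not the authors' implementation.

    # Schematic of the analysis chain: person detection -> attribute
    # recognition -> action recognition -> quality score. All model calls
    # are hypothetical stubs to be supplied by real vision modules.
    from typing import Callable

    def assess_body_language(frame,
                             detect_person: Callable,
                             recognize_attributes: Callable,
                             recognize_action: Callable,
                             requested_action: str) -> float:
        """Return a 0-1 score for how well the performer in `frame` matches
        the body language requested by the requester."""
        person = detect_person(frame)                      # e.g. bounding box of the performer
        if person is None:
            return 0.0
        attributes = recognize_attributes(frame, person)   # posture/appearance cues (unused in this toy rule)
        action, confidence = recognize_action(frame, person)
        # Illustrative rule: credit only when the recognized action matches
        # the request, weighted by the recognizer's confidence.
        return confidence if action == requested_action else 0.0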

Keywords: Location-based games, Social interactions, Public video cameras

 

 


2025.03.03 – Presentation by 葉卯陽 – I’m Feeling Lucky

Artwork Title: I’m Feeling Lucky

The Prix Ars Electronica | Award of Distinction 2024

Author: Timothy Thomasson

→ Original Link:

→ Artwork Website

→ Class Presentation PPT

Abstract: 

I’m Feeling Lucky is a real-time computer-generated animation that questions relationships to image, geography, virtual space, historical media technology, and mass data collection systems. The work features a 3D virtual landscape that is both historically and geographically ambiguous, generated in real time using game engine technology. This virtual landscape is then populated with thousands of figures sourced from the vast pool of 360-degree image data collected by Google Street View. These figures are processed through a deep neural network so that they become three-dimensional models in the virtual space, each frozen in its captured pose. The work interrogates mass image collection systems, as many of these individuals may not have been aware that their photo was taken by Google, let alone have anticipated being placed in this new, strange setting. Many thousands of figures sourced from all over the world are randomly selected to inhabit the endless landscape together.

The work takes into consideration the panorama paintings of the 19th century as objects of historical, cultural, and perceptual significance, and situates them within contemporary media contexts. Panoramas are rotunda structures in which large 360-degree paintings depict sublime natural landscapes, battle scenes, religious events, or large cityscapes, characterized by their lack of framed boundaries and the inability to be viewed in their entirety with a single gaze. These panorama structures are theorized as part of the lineage of immersive media technologies and can be analyzed as proto-cinematic and proto-virtual-reality forms.

With I’m Feeling Lucky, the virtual environment is generated and populated procedurally, so the panoramic image becomes infinite as the virtual camera slowly and endlessly pans across the landscape, setting the stillness of painting at odds with the expectation of high-speed movement and technical progression in digital imagery.
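Read as a system, the piece amounts to an endless loop: reconstructed figures are sampled at random from a large pool, scattered across procedurally generated terrain, and a virtual camera pans slowly across the result. The engine-agnostic sketch below is only a hypothetical illustration of that loop; no real Street View data or game-engine API is involved.

    # Hypothetical sketch of the procedural population loop described above.
    import random

    FIGURE_POOL = [f"figure_{i:05d}" for i in range(100_000)]   # stand-in for reconstructed 3-D figures

    def populate_tile(tile_index: int, tile_width: float = 100.0, n_figures: int = 50):
        """Scatter randomly chosen figures across one stretch of terrain."""
        return [(random.choice(FIGURE_POOL),
                 tile_index * tile_width + random.uniform(0, tile_width),
                 random.uniform(0, tile_width))
                for _ in range(n_figures)]

    camera_x, current_tile, scene = 0.0, -1, []
    for frame in range(3):                      # the installation never leaves this loop
        camera_x += 0.05                        # slow, almost imperceptible pan
        tile = int(camera_x // 100)
        if tile != current_tile:                # new ground is generated as the camera reaches it
            scene, current_tile = populate_tile(tile), tile
        print(f"frame {frame}: camera at {camera_x:.2f} m, {len(scene)} figures in tile {tile}")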

Jury statement:

In I’m Feeling Lucky by Canadian artist Timothy Thomasson, a historically and geographically ambiguous 3D virtual landscape is generated in real-time with game engine technology and populated with figures from Google Street View. Processed by a deep neural network, thousands of anonymous figures taken from all over the world are randomly selected to inhabit the landscape. The work is based on 19th century panoramas: all-encompassing circular paintings that featured spectacular natural landscapes or battle scenes that completely surrounded the viewer. The panoramas’ immersive scale aimed to condition and mediate perception, thus linking the spectacle and scale of the time with the contemporary scales of imaging and data collection undertaken by Google. Images in the work are continually produced in run time as a virtual camera rotates around the space endlessly and at times almost imperceptibly, thus creating a disjunction between the stillness of landscape painting and the expectation of high frame rate digital images. The jury was impressed with how I’m Feeling Lucky subtly links histories of geography and historical media technology with current issues around mass data collection.


2025.03.04 – Presentation by 周巧其 – Social Intervention and the Multitude (社會介入與諸眾)

Social Intervention and the Multitude (社會介入與諸眾)

Presentation PDF
Prix Ars Electronica | 2020-2022 | Golden Nica

Be Water by Hong Kongers
https://archive.aec.at/prix/254025/

Bi0film.net: Resist like bacteria
https://junghsu.com/Bi0film-net

Forensic Architecture’s Cloud Studies
https://forensic-architecture.org/


2025.03.04 – Presentation by 孫以臻 – Nosukaay

Nosukaay / Diane Cescutti (FR)

Interactive Art+ | Ars Electronica Golden Nica 2024

Nosukaay
Artist: Diane Cescutti (FR)

The loom could be envisioned as a programmable machine that encodes knowledge into fabric, serving as a means of preserving and transmitting culture; while the computer processes data, the loom preserves stories and traditions. ‘Nosukaay’ means computer in Wolof, a language spoken across much of West Africa; the installation Nosukaay merges textile hapticity with digital space to produce a hybrid that expands the notion of interactivity. It is based on a modified Manjacque loom in which the loom’s frames are replaced by two screens that introduce a video game, in which users interact with the “wisdom of the system” through a deity. Its tactile interface is made of Manjak loincloth, woven by the artist Edimar Rosa in Dakar. If the player makes a choice that does not respect the machine deity, and hence the importance of the knowledge transmitted, the user is ejected from the game and sent back to the beginning. Nosukaay as a textile-computer hybrid allows us to rethink the concept of the “computer” through a rich tapestry of shared understanding that interweaves craft with computational practices.
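The reset mechanic described above (any choice that does not respect the deity sends the player back to the start) can be pictured as a single state-transition rule. This is only an illustrative reading, not the installation's code.

    # Illustrative reading of the game's reset mechanic.
    from typing import Callable

    def advance(state: int, choice: str, respectful: Callable[[str], bool]) -> int:
        """Move one step forward on a respectful choice; otherwise eject the
        player back to the beginning of the game (state 0)."""
        return state + 1 if respectful(choice) else 0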

ref.:
https://archive.aec.at/prix/290626/
https://www.africandigitalart.com/nosukaay-weaving-the-future-with-tradition-and-technology/
https://dianecescutti.com/works/nosukaay/

PDF for presentation


2025.03.04 – Presentation by 吳柏瑤 – Cold Call: Time Theft as Avoided Emissions

Presentation PDF

Cold Call: Time Theft as Avoided Emissions
Sam Lavigne and Tega Brain (INT)

Prix Ars Electronica | The 2024 Winners | Interactive Art

Abstract

Cold Call: Time Theft as Avoided Emissions is an unconventional carbon offsetting scheme that draws on strategies of worker sabotage and applies them in the context of high-emission companies in the fossil fuel industry. Time theft is a strategy of deliberately slowing productivity, in which workers waste time and are therefore paid for periods of idleness: examples include fake sick days, sleeping on the job, extended lunch breaks, and engaging in non-work-related activities such as social media or unrelated phone calls. In extractive industries where productivity remains firmly tethered to carbon emissions, sabotage is an effective strategy for emissions reductions.

Cold Call is an installation that takes the form of a call center. Audiences are connected by telephone to executives in the fossil fuel industry and instructed to keep them on the phone as long as possible. The cumulative time stolen from these executives is then quantified as carbon credits, using an innovative new offsetting methodology. The project is powered by custom call center software that allows participants to make calls, learn about who they are calling, access call scripts and conversation ideas, and listen to recordings of calls that have already been made. A leaderboard tracks the total number and length of calls. To date, the longest call has stretched for over 39 minutes.
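The project does not publish its offsetting formula, but the arithmetic it implies is simple: time kept away from work, multiplied by an assumed rate of emissions enabled per executive hour, yields "avoided" tonnes of CO2, which are then denominated as credits. The figures below are invented placeholders used only to show the shape of that calculation.

    # Hypothetical version of the time-theft-to-carbon-credit conversion.
    TONNES_CO2_PER_EXEC_HOUR = 12.0     # invented placeholder rate, not the project's figure

    def avoided_emissions(call_minutes: float) -> float:
        """Tonnes of CO2 'avoided' by keeping an executive on the phone."""
        return (call_minutes / 60.0) * TONNES_CO2_PER_EXEC_HOUR

    # The longest call reported so far lasted just over 39 minutes.
    credits = avoided_emissions(39)     # 1 carbon credit = 1 tonne of CO2
    print(f"{credits:.1f} tonnes CO2 avoided ~ {credits:.1f} credits")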

 


2023.05.23 – Presentation by 洪寶惜 – GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist

The Art Newspaper: An AI bot has figured out how to draw like Banksy. And it’s uncanny!

Source: An AI bot has figured out how to draw like Banksy. And it’s uncanny (theartnewspaper.com)

Presentation PPT: [PPT]

Abstract

To create these images, the work’s creator, Round, has used a type of computerised machine learning framework known as a GAN (generative adversarial network). This specific GAN was trained for five days using a portfolio of hundreds of images of (potentially) Banksy’s work, until it was able to produce an image that bears a superficial likeness to the originals.
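For readers unfamiliar with the mechanics, a GAN pits a generator against a discriminator: the discriminator learns to tell training images from generated ones, and the generator learns to fool it. The PyTorch sketch below shows that adversarial loop in miniature on tiny random tensors; it is a generic illustration under assumed shapes and hyperparameters, not Round's GANksy code or architecture.

    # Minimal adversarial training loop (generic GAN, not GANksy itself).
    import torch
    import torch.nn as nn

    z_dim, img_dim = 16, 64                      # latent size and flattened 8x8 "image" size (assumed)
    G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, img_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    training_set = torch.rand(256, img_dim) * 2 - 1     # stand-in for the scraped Banksy-style images

    for step in range(100):
        real = training_set[torch.randint(0, 256, (32,))]
        fake = G(torch.randn(32, z_dim))

        # Discriminator step: label real images 1 and generated images 0.
        d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: push the discriminator to call the fakes real.
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()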


2023.05.23 – Presentation by 巫思萱 – Co-Writing with Opinionated Language Models Affects Users’ Views

Paper title: Co-Writing with Opinionated Language Models Affects Users’ Views

Authors: Maurice Jakesch et al.

Source: CHI ’23, https://dl.acm.org/doi/10.1145/3544548.3581196

Presentation slides: [PDF]

ABSTRACT

If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
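Mechanically, the "opinionated" assistant in the treatment conditions can be approximated by steering a language model with a fixed stance before it proposes continuations. The sketch below uses a placeholder complete() function standing in for any LLM call; the prompt wording and interface are assumptions, not the study's actual implementation.

    # Sketch of an opinion-steered writing assistant (hypothetical interface).
    def complete(prompt: str) -> str:
        """Placeholder for a call to a large language model."""
        raise NotImplementedError("plug in an actual LLM client here")

    def suggest_continuation(essay_so_far: str, stance: str) -> str:
        """Propose the next sentence while nudging the argument toward `stance`
        ('good' or 'bad'), mirroring the two treatment conditions."""
        prompt = (
            "You are a writing assistant. Continue the user's essay about whether "
            f"social media is good for society, and argue that it is {stance} for society.\n\n"
            f"Essay so far:\n{essay_so_far}\n\nNext sentence:"
        )
        return complete(prompt)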


2023.05.23 – Presentation by 李艷琳 – CLOUD STUDIES

CLOUD STUDIES

PRIX ARS ELECTRONICA 2021 – Artificial Intelligence & Life Art – Golden Nica

 

Authors:

Forensic Architecture (FA)

→ Original Link:

→ Artwork Video

→ Class Presentation PPT

 

Abstract: 

Civil society rarely has privileged access to classified information, making the information that is available from ‘open sources’ crucial in identifying and analyzing human rights violations by states and militaries. The wealth of newly-available data—images and videos pulled from the open source internet—around which critical new methodologies are being built, demands new forms of image literacy, an ‘investigative aesthetics,’ to read traces of violence in fragmentary data drawn from scenes of conflict and human rights violations. The results of these new methodologies have been significant, and Forensic Architecture (FA) has been among the pioneers in this field, as open source investigation (OSI) has impacted international justice mechanisms, mainstream media, and the work of international human rights NGOs and monitors. The result has been a new era for human rights: what has been called ‘Human Rights 3.0.’

In Forensic Architecture’s work, physical and digital models are more than representations of real-world locations—they function as analytic or operative devices. Models help us to identify the relative location of images, camera positions, actions, and incidents, revealing what parts of the environment are ‘within the frame’ and what remains outside it, thereby giving our investigators a fuller picture of how much is known, or not, about the incident they are studying.

There remain, however, modes of violence that are not easily captured even ‘within the frame.’ Recent decades have seen an increase in airborne violence, typified by the extensive use of chlorine gas and other airborne chemicals against civilian populations in the context of the Syrian civil war. Increasingly, tear gas is used to disperse civilians (often gathered in peaceful protest), while aerial herbicides destroy arable land and displace agricultural communities, and large-scale arson eradicates forests to create industrial plantations, generating vast and damaging smoke clouds. Mobilized by state and corporate powers, toxic clouds affect the air we breathe across different scales and durations, from urban squares to continents, momentary incidents to epochal latencies. These clouds are not only meteorological but political events, subject to debate and contestation. Unlike kinetic violence, where a single line can be drawn between a victim and a ‘smoking gun’, in analyzing airborne violence, causality is hard to demonstrate; in the study of clouds, the ‘contact’ and the ‘trace’ drift apart, carried away by winds or ocean currents, diffused into the atmosphere.  Clouds are transformation embodied, their dynamics elusive, governed by non-linear behavior and multi-causal logics.

One response by FA has been to work with the Department of Mechanical Engineering at Imperial College London (ICL), world leaders in fluid dynamics simulation. Together, FA and ICL have pioneered new methodologies for meeting the complex challenges to civil society posed by airborne violence. The efficacy of such an approach in combatting environmental violence has already been demonstrated—FA’s investigation into herbicidal warfare in Gaza was cited by the UN—and has significant future potential, as state powers are increasingly drawn to those forms of violence and repression that are difficult to trace.
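At their simplest, fluid-dynamics models of a drifting plume are advection-diffusion problems: a concentration field is carried along by the wind and spreads out as it goes. The toy one-dimensional solver below illustrates only that textbook building block, with invented grid and wind parameters; FA and ICL's actual casework relies on far more sophisticated simulation.

    # Toy 1-D advection-diffusion step for a drifting, spreading plume.
    # Purely illustrative; not Forensic Architecture / ICL methodology.
    import numpy as np

    nx, dx, dt = 200, 5.0, 0.5        # grid cells, cell size (m), time step (s) -- assumed
    wind, diff = 2.0, 10.0            # wind speed (m/s) and diffusivity (m^2/s) -- assumed
    c = np.zeros(nx); c[20] = 1.0     # initial puff of tracer (e.g. a tear-gas release)

    for _ in range(500):
        grad = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)        # central difference
        lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2    # discrete Laplacian
        c = c + dt * (-wind * grad + diff * lap)                  # explicit Euler update

    print("plume centre of mass (m):", (np.arange(nx) * dx * c).sum() / c.sum())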

Cloud Studies brings together eight recent investigations by Forensic Architecture, each examining different types of toxic clouds and the capacity of states and corporations to occupy airspace and create unliveable atmospheres. Combining digital modelling, machine learning, fluid dynamics, and mathematical simulation in the context of active casework, it serves as a platform for new human rights research practices directed at those increasingly prevalent modes of ‘cloud-based,’ airborne violence. Following a year marked by environmental catastrophe, a global pandemic, political protest, and an ongoing migrant crisis, Cloud Studies offers a new framework for considering the connectedness of global atmospheres, the porousness of state borders and what Achille Mbembe terms ‘the universal right to breathe.’
