2025.03.04 Presentation by 周巧其 – Social Intervention and the Multitude

Social Intervention and the Multitude (社會介入與諸眾)

Presentation PDF
Prix Ars Electronica | 2020-2022 | Golden Nica

Be Water by Hong Kongers
https://archive.aec.at/prix/254025/

Bi0film.net: Resist like bacteria
https://junghsu.com/Bi0film-net

Forensic Architecture’s Cloud Studies
https://forensic-architecture.org/

Posted in 113下學期 | Comments closed

2025.03.04 Presentation by 孫以臻 – Nosukaay

Nosukaay / Diane Cescutti (FR)

Interactive Art + ARS ELECTRONICA Golden Nica 2024

Nosukaay
Artist: Diane Cescutti (FR)

The loom can be envisioned as a programmable machine that encodes knowledge into fabric, serving as a means of preserving and transmitting culture; while the computer processes data, the loom preserves stories and traditions. ‘Nosukaay’ means computer in Wolof, a language spoken across much of West Africa. The installation Nosukaay merges textile hapticity with digital space to produce a hybrid that expands the notion of interactivity. It is based on a modified Manjacque loom in which the loom’s frames are replaced by two screens that introduce a video game where users interact with the “wisdom of the system” through a deity. Its tactile interface is made of Manjak loincloth, woven by the artist Edimar Rosa in Dakar. If the player makes a choice that does not respect the machine deity, and hence the importance of the knowledge transmitted, the user is ejected from the game and sent back to the beginning. As a textile–computer hybrid, Nosukaay allows us to rethink the concept of the “computer” through a rich tapestry of shared understanding that interweaves craft with computational practices.
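The “loom as programmable machine” idea can be made concrete: in weaving notation, a threading (which shaft holds each warp thread) and a lift plan (which shafts rise on each pick) deterministically compute the cloth’s pattern, much as a program computes its output. A minimal sketch, with invented names and no relation to Nosukaay’s actual software:

```python
# Toy illustration of the loom as a binary machine (not Nosukaay's code):
# a threading and a lift plan deterministically "compute" the woven
# pattern (the drawdown), one pick (row) at a time.

def drawdown(threading, liftplan):
    """Return a grid: 1 = warp thread on top, 0 = weft on top."""
    return [[1 if shaft in lifts else 0 for shaft in threading]
            for lifts in liftplan]

# A 2-shaft plain weave: warp threads alternate shafts 1 and 2,
# and successive picks alternate which shaft is lifted.
threading = [1, 2, 1, 2]
liftplan = [{1}, {2}, {1}, {2}]

for row in drawdown(threading, liftplan):
    print("".join("#" if cell else "." for cell in row))
```

Swapping in a different lift plan (a different “program”) yields twills, basket weaves, and so on from the same warp.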

ref.:
https://archive.aec.at/prix/290626/
https://www.africandigitalart.com/nosukaay-weaving-the-future-with-tradition-and-technology/
https://dianecescutti.com/works/nosukaay/

Presentation PDF


2025.03.04 Presentation by 吳柏瑤 – Cold Call: Time Theft as Avoided Emissions

Cold Call: Time Theft as Avoided Emissions
Sam Lavigne and Tega Brain (INT)

Prix Ars Electronica | The 2024 Winners | Interactive Art

Presentation PDF

Abstract

Cold Call: Time Theft as Avoided Emissions is an unconventional carbon offsetting scheme that draws on strategies of worker sabotage and applies them to high-emission companies in the fossil fuel industry. Time theft is a strategy of deliberately slowing productivity, in which workers waste time and are therefore paid for periods of idleness: fake sick days, sleeping on the job, extended lunch breaks, or engaging in non-work-related activities such as social media or unrelated phone calls. In extractive industries where productivity remains firmly tethered to carbon emissions, sabotage is an effective strategy for emissions reductions.

Cold Call is an installation that takes the form of a call center. Audiences are connected by telephone to executives in the fossil fuel industry and instructed to keep them on the phone as long as possible. The cumulative time stolen from these executives is then quantified as carbon credits, using an innovative new offsetting methodology. The project is powered by custom call center software that allows participants to make calls, learn about who they are calling, access call scripts and conversation ideas, and listen to recordings of calls that have already been made. A leaderboard tracks the total number and length of calls. To date, the longest call has stretched for over 39 minutes.
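The project’s offsetting arithmetic is not spelled out here, so the conversion below is a purely hypothetical illustration of the idea: minutes stolen from an executive are treated as a fraction of a work year whose “enabled” emissions are thereby avoided. All figures are invented assumptions, not the artists’ methodology:

```python
# Hypothetical sketch of Cold Call's accounting idea. The numbers are
# illustrative assumptions only: wasted executive minutes -> forgone
# productive work -> avoided CO2-equivalent emissions.

def avoided_emissions_kg(call_minutes,
                         tco2e_per_work_year=1000.0,   # assumed emissions "enabled" per executive-year
                         work_minutes_per_year=120_000):  # assumed working minutes per year
    """Convert minutes stolen from an executive into kg of CO2e avoided."""
    work_year_fraction = call_minutes / work_minutes_per_year
    return work_year_fraction * tco2e_per_work_year * 1000  # tonnes -> kg

total = avoided_emissions_kg(39)  # the longest call so far: 39 minutes
print(f"{total:.1f} kg CO2e avoided")
```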

 


2023.5.23 Presentation by 洪寶惜 – GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist

The Art Newspaper:
An AI bot has figured out how to draw like Banksy. And it’s uncanny!

News source: An AI bot has figured out how to draw like Banksy. And it’s uncanny (theartnewspaper.com)

Presentation PPT: [PPT]

Abstract

To create these images, Round has used a type of computerised machine learning framework known as a GAN (generative adversarial network). This specific GAN was trained for five days using a portfolio of hundreds of images of (potentially) Banksy’s work, until it was able to produce an image that bears a superficial likeness to the originals.
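The training process described is the standard adversarial game introduced by Goodfellow et al. (2014): a generator G and a discriminator D are trained against each other on the minimax objective

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

where p_data is the distribution of the (potentially) Banksy images in the portfolio and z is random noise; training continues until the generator’s samples become hard for the discriminator to tell apart from the originals.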

Posted in 111下學期 | Comments closed

2023.5.23 Presentation by 巫思萱 – Co-Writing with Opinionated Language Models Affects Users’ Views

Paper title: Co-Writing with Opinionated Language Models Affects Users’ Views

Authors: Maurice Jakesch, et al.

Source: CHI ’23 https://dl.acm.org/doi/10.1145/3544548.3581196

Presentation PPT: [PDF]

ABSTRACT

If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.


2023.05.23 Presentation by 李艷琳 – CLOUD STUDIES

CLOUD STUDIES

PRIX ARS ELECTRONICA 2021 – Artificial Intelligence & Life Art – Golden Nica

 

Authors:

Forensic Architecture (FA)

→ Original Link:

→ Artwork Video

→ Class Presentation PPT

 

Abstract: 

Civil society rarely has privileged access to classified information, making the information that is available from ‘open sources’ crucial in identifying and analyzing human rights violations by states and militaries. The wealth of newly-available data—images and videos pulled from the open source internet—around which critical new methodologies are being built, demands new forms of image literacy, an ‘investigative aesthetics,’ to read traces of violence in fragmentary data drawn from scenes of conflict and human rights violations. The results of these new methodologies have been significant, and Forensic Architecture (FA) has been among the pioneers in this field, as open source investigation (OSI) has impacted international justice mechanisms, mainstream media, and the work of international human rights NGOs and monitors. The result has been a new era for human rights: what has been called ‘Human Rights 3.0.’

In Forensic Architecture’s work, physical and digital models are more than representations of real-world locations—they function as analytic or operative devices. Models help us to identify the relative location of images, camera positions, actions, and incidents, revealing what parts of the environment are ‘within the frame’ and what remains outside it, thereby giving our investigators a fuller picture of how much is known, or not, about the incident they are studying.

There remain, however, modes of violence that are not easily captured even ‘within the frame.’ Recent decades have seen an increase in airborne violence, typified by the extensive use of chlorine gas and other airborne chemicals against civilian populations in the context of the Syrian civil war. Increasingly, tear gas is used to disperse civilians (often gathered in peaceful protest), while aerial herbicides destroy arable land and displace agricultural communities, and large-scale arson eradicates forests to create industrial plantations, generating vast and damaging smoke clouds. Mobilized by state and corporate powers, toxic clouds affect the air we breathe across different scales and durations, from urban squares to continents, momentary incidents to epochal latencies. These clouds are not only meteorological but political events, subject to debate and contestation. Unlike kinetic violence, where a single line can be drawn between a victim and a ‘smoking gun’, in analyzing airborne violence, causality is hard to demonstrate; in the study of clouds, the ‘contact’ and the ‘trace’ drift apart, carried away by winds or ocean currents, diffused into the atmosphere.  Clouds are transformation embodied, their dynamics elusive, governed by non-linear behavior and multi-causal logics.

One response by FA has been to work with the Department of Mechanical Engineering at Imperial College London (ICL), world leaders in fluid dynamics simulation. Together, FA and ICL have pioneered new methodologies for meeting the complex challenges to civil society posed by airborne violence. The efficacy of such an approach in combatting environmental violence has already been demonstrated—FA’s investigation into herbicidal warfare in Gaza was cited by the UN—and has significant future potential, as state powers are increasingly drawn to those forms of violence and repression that are difficult to trace.
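To give a feel for what fluid dynamics simulation means at its very simplest (this toy is in no way FA/ICL’s methodology, which involves full computational fluid dynamics), a one-dimensional advection-diffusion sketch shows how a released gas both drifts with the wind and spreads out, pulling the ‘trace’ away from the point of ‘contact’:

```python
# Toy 1-D advection-diffusion of a gas release (illustrative only):
#   dC/dt = -u * dC/dx + D * d2C/dx2
# solved with explicit upwind finite differences.

def step(c, u=1.0, D=0.1, dx=1.0, dt=0.1):
    """Advance the concentration profile c by one time step."""
    n = len(c)
    new = c[:]  # boundaries stay fixed at their initial values
    for i in range(1, n - 1):
        adv = -u * (c[i] - c[i - 1]) / dx                    # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2  # diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new

c = [0.0] * 50
c[5] = 100.0              # an instantaneous release at x = 5
for _ in range(200):      # simulate t = 20 time units
    c = step(c)
print(f"peak has drifted to x = {c.index(max(c))}")
```

After 200 steps the plume’s peak has been carried roughly u·t = 20 cells downwind while the profile flattens, illustrating why causality in cloud events is so hard to pin down.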

Cloud Studies brings together eight recent investigations by Forensic Architecture, each examining different types of toxic clouds and the capacity of states and corporations to occupy airspace and create unliveable atmospheres. Combining digital modelling, machine learning, fluid dynamics, and mathematical simulation in the context of active casework, it serves as a platform for new human rights research practices directed at those increasingly prevalent modes of ‘cloud-based,’ airborne violence. Following a year marked by environmental catastrophe, a global pandemic, political protest, and an ongoing migrant crisis, Cloud Studies offers a new framework for considering the connectedness of global atmospheres, the porousness of state borders and what Achille Mbembe terms ‘the universal right to breathe.’


2023.05.23 Presentation by 劉士達 – Tangible Globes for Data Visualisation in Augmented Reality

Paper title: Tangible Globes for Data Visualisation in Augmented Reality

Authors: Kadek Ananta Satriadi, et al.

Source: CHI ’22 https://doi.org/10.1145/3491102.3517715

Presentation PPT: [PPT] [PDF]

 

Abstract

Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the “tangible-virtual interplay” of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations and lessons learned aim to guide the design of tangible globes for data visualisation in AR.
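At the core of anchoring a virtual overlay on a tangible globe is a coordinate mapping from geographic latitude/longitude to a 3-D point on the sphere in the globe’s local frame; a minimal sketch (not the paper’s code; the function name is invented):

```python
# Map geographic coordinates onto a sphere so a virtual data marker can
# be anchored to the matching spot on a tangible globe.
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Return the (x, y, z) surface point for a latitude/longitude pair,
    with the north pole on +z and (lat 0, lon 0) on +x."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

print(latlon_to_xyz(90, 0))  # north pole: approximately (0, 0, 1)
print(latlon_to_xyz(0, 0))   # equator/prime meridian: approximately (1, 0, 0)
```

In an AR pipeline this point would then be transformed by the tracked pose of the physical globe before rendering.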

 

Keywords: immersive analytics, tangible user interface, augmented reality, geographic visualization


2023.05.09 Presentation by 洪寶惜 – The “Blinding Light” (《光。盲》): A Techno Artwork to Reflect Technology-Mediated Bias

Thesis title: 《光。盲》反思科技媒體偏誤之科技藝術創作

The “Blinding Light” – A Techno Artwork to Reflect Technology-Mediated Bias

Author: 張瑜真 Chang, Yu-Chen (2022)

National Cheng Kung University, Master’s Program in Techno Art

Abstract

Thesis link

Presentation PPT

Keywords: technology-mediated illusion


2023.05.09 Presentation by 李艷琳 – Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the avant-garde.net

星叢‧複線‧集合:網路前衛藝術美學語言

Constellation‧Multiple lines‧Assemblages: Aesthetic language of the avant-garde.net

 

Authors: 林欣怡

Doctoral dissertation, Institute of Applied Arts, National Chiao Tung University

 

Abstract:

“Constellation‧Multiple Lines‧Assemblages: Aesthetic Language of the avant-garde.net” is written through the overlapping concepts of the constellation as textual form, spiritual gesture, object-orientation, data subjectivity, network materiality, and multiple extended embodiment. These concepts combine the perspectives of aesthetic language, philosophical concepts, and artworks to produce a conceptual assemblage that adheres to the net itself: on the one hand it mirrors the inherently polymorphous, plural character of the network body; on the other it points to the openness and transformability of networked aesthetic concepts. Through the interplay of these three perspectives and their nodes, the dissertation links together heterogeneous paths for thinking about avant-garde net art works. Its first facet, “the aesthetics of avant-garde net art,” takes the “constellation” of Adorno’s aesthetic theory as its mode of discourse, and attends to how network space connects to mega-networks and generates “momentum,” and to how that momentum shapes spiritual postures. Following from constellation, momentum, spiritual gesture, and the perspective of the body, it derives the data subjectivity mirrored in collective online creation, and unfolds the mutability of “the concept as object,” forming an aesthetic language of network materiality staged through objects and object-orientation. Finally, it discusses the context of net art practice in Taiwan, seeking points of difference and connection, and through them articulates the constitution of Taiwanese net art.

 

Keywords:

Avant-garde net art, constellation, assemblage, data subjectivity, network materiality

 

→ Original Link:

→ Class Presentation PPT


2023.05.09 Presentation by 劉士達 – Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Thesis title: Shells and Stages for Actuated TUIs: Reconfiguring and Orchestrating Dynamic Physical Interaction

Author: Nakagaki, Ken

Year: September 2021. Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning on August 20, 2021, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Media Arts and Sciences.

Source: MIT Media Lab – Tangible Media Group https://dspace.mit.edu/handle/1721.1/142836

Presentation PPT: [PPT] [PDF]

Abstract

Research on Actuated and Shape-Changing Tangible User Interfaces (TUIs) in the field of Human-Computer Interaction (HCI) has widely explored the design of embodied interactions using digital computation. While advanced technical approaches, such as robotics and material science, have led to many concrete instances of Actuated TUIs, a single actuated hardware system, in reality, is inherently limited by its fixed configuration, thus limiting the reconfigurability, adaptability, and expressibility of its interactions.

In my thesis, I introduce novel hardware augmentation methods, Shells and Stages, for Actuated TUI hardware to expand and enrich their interactivity and expressibility for dynamic physical interactions. Shells act as passive mechanical attachments for Actuated TUIs that can extend, reconfigure and augment the interactivity and functionality of the hardware. Stages are physical platforms that allow Actuated TUIs to propel on a platform to create novel physical expression based on the duality of front stage and back stage. These approaches are inspired by theatrical performances, computational and robotic architecture, biological systems, physical tools and science fiction. While Shells and Stages can individually augment the interactivity and expressibility of the Actuated TUI system, the combination of the two enhances advanced physical expression based on combined shell-swapping and stage-transitioning. By introducing these novel modalities of Shells and Stages, the thesis expands and contributes to a new paradigm of Inter-Material / Device Interaction in the domain of Actuated TUIs.

The thesis demonstrates the concepts of Shells and Stages based on existing Actuated TUI hardware, including pin-based shape displays and self-propelled swarm user interfaces. Design and implementation methods are introduced to fabricate mechanical shells with different properties, and to orchestrate a swarm of robots on the stage with arbitrary configurations. To demonstrate the expanded interactivity and reconfigurability, a variety of interactive applications are presented via prototypes, ranging from digital data interaction and reconfigurable physical environments to storytelling and tangible gaming. Overall, my research introduces a new A-TUI design paradigm that incorporates self-actuating hardware (Actuated TUIs) and passively actuated mechanical modules (Shells) together with surrounding physical platforms (Stages). By doing so, my research envisions a future in which computational technology is coupled seamlessly with our physical environment. This next generation of TUIs, by interweaving multiple HCI research streams, aims to provide endless possibilities for reconfigurable tangible and embodied interactions enabled by fully expressive and functional movements and forms.


2023.05.15 Presentation by 劉士達 – Wander: An AI-driven Chatbot to Visit the Future Earth

Paper title: Wander: An AI-driven Chatbot to Visit the Future Earth

Authors: Yuqian Sun, Chenhang Cheng, Ying Xu, Yihua Li, Chang Hee Lee, Ali Asadipour

Source: ACM MM ’22 https://dl.acm.org/doi/10.1145/3503161.3549971

Presentation PPT: [PPT] [PDF]

Abstract

This artwork presents an intelligent chatbot called Wander. The work uses knowledge-based story generation to facilitate a narrative AI chatbot on everyday communication platforms, producing interactive fiction with the most accessible natural language input: text messages. On social media platforms such as Discord and WeChat, Wander can generate a science-fiction-style travelogue about the future earth, including text, images and global coordinates (GPS) based on real-world locations (e.g. Paris). The journeys are visualised in real time on an interactive map that can be updated with participants’ data. Based on Viktor Shklovsky’s defamiliarization technique, we present how an AI agent can become a storyteller through common messages in daily life and lead participants to see the world from new perspectives. The website of this work is:
https://wander001.com/
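Wander’s input/output contract, as described, can be caricatured in a few lines; the toy below is purely hypothetical, with a dictionary lookup standing in for the actual knowledge-based story generation:

```python
# Hypothetical sketch of Wander's contract (not the actual system): a place
# name in a chat message yields a future-earth travelogue snippet plus the
# real-world coordinates used to plot the journey on a map.

PLACES = {  # toy knowledge base: name -> (lat, lon)
    "Paris": (48.8566, 2.3522),
    "Taipei": (25.0330, 121.5654),
}

def wander_reply(message):
    for name, coords in PLACES.items():
        if name.lower() in message.lower():
            story = (f"Year 2200, {name}: the old boulevards are canals now, "
                     f"and lantern-drones map the flooded arcades.")
            return {"text": story, "gps": coords}
    return {"text": "Wander drifts on, waiting for a place name...", "gps": None}

reply = wander_reply("take me to Paris")
print(reply["gps"])  # (48.8566, 2.3522)
```

In the real system the story text comes from a trained language model conditioned on a knowledge base, and the coordinates drive the live map visualization.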

Keywords: Intelligent Interactive System, Co-creative AI, Chatbot, Metaverse, Gaming


2023.05.09 Presentation by 巫思萱 – Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing

Designing and Deploying Robotic Companions to Improve Human Psychological Wellbeing

Author: Sooyeon Jeong

Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning, on June 29, 2022, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Media Arts and Sciences

 

Thesis link

Presentation PPT

 

Abstract:
Globally, more than 264 million people of all ages are affected by depression, which has become a leading cause of disability. Several interactive technologies for mental health have been developed to make various therapeutic services more accessible and scalable. However, most are designed to engage users only within therapy and intervention tasks. This thesis presents social robots that deliver interactive positive psychology interventions and build rapport with people over time as helpful companions to improve psychological wellbeing. Two long-term deployment studies explored and evaluated how these robotic agents could improve people’s psychological wellbeing in real-world contexts. In Study 1, a robotic coach provided seven positive psychology interventions for college students in on-campus dormitory settings and showed significant association with improvements in students’ psychological wellbeing, mood, and motivation to change. In Study 2, we deployed our robots in 80 people’s homes across the U.S. during the COVID-19 pandemic and evaluated the efficacy of a social robot that delivers wellbeing interventions as a peer-like companion rather than an expert coach. The companion-like robot was shown to be the most effective in building a positive therapeutic alliance with people and resulted in enhanced psychological wellbeing, improved readiness for change, and reduced negative affect. We further explored how traits, such as personality and age, influence the intervention outcomes and participants’ engagement with the robot. The two long-term in-the-wild studies offer valuable insights into design challenges and opportunities for companion AI agents that personalize mental health interventions and agent behaviors based on users’ traits and behavioral cues for better mental health outcomes.


2023.04.25 Presentation by 洪寶惜 – A Design Framework for Smart Glass Augmented Reality Experiences in Heritage Sites

Author: Mariza Dima, Brunel University London, UK

Source:

ACM Journals > Journal on Computing and Cultural Heritage

2022 https://dl.acm.org/doi/10.1145/3490393 [PDF]

Presentation PPT: [PPT]

Abstract

Despite the growing applications of smart glass Augmented Reality (AR) in heritage, there is not a framework that can serve as a base for designing meaningful and educational immersive heritage experiences. This article proposes such a prototype design framework for AR experiences in heritage sites, drawing on literature that connects affective experiences with learning and practically exploring AR as a non-didactic storytelling medium. Smart glass AR is considered here an important technology milestone for creating affective interactions, one that offers visitors/viewers new ways to experience, embody, and have a physical and social interaction with a localized past and learn about it.


2023.04.25 Presentation by 李艷琳 – RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions

RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions


Authors:

Yunlong Wang, Shuyuan Shen, Brian Y Lim

National University of Singapore, Singapore

 

Abstract:

Generative AI models have shown impressive ability to produce images with text prompts, which could benefit creativity in visual art creation and self-expression. However, it is unclear how precisely the generated images express contexts and emotions from the input texts. We explored the emotional expressiveness of AI-generated images and developed RePrompt, an automatic method to refine text prompts toward precise expression of the generated images. Inspired by crowdsourced editing strategies, we curated intuitive text features, such as the number and concreteness of nouns, and trained a proxy model to analyze the feature effects on the AI-generated image. With model explanations of the proxy model, we curated a rubric to adjust text prompts to optimize image generation for precise emotion expression. We conducted simulation and user studies, which showed that RePrompt significantly improves the emotional expressiveness of AI-generated images, especially for negative emotions.
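The kind of prompt features the paper describes, such as the number and concreteness of nouns, can be sketched as follows; the lexicon and scores here are invented toys, not the paper’s curated features or proxy model:

```python
# Illustrative sketch of prompt-feature extraction in the spirit of RePrompt.
# The concreteness lexicon below is invented for illustration; real systems
# use curated psycholinguistic norms and a trained proxy model.

CONCRETENESS = {"dog": 4.9, "rain": 4.6, "sorrow": 1.9, "freedom": 1.6}

def prompt_features(prompt, lexicon=CONCRETENESS):
    """Count known nouns in a prompt and average their concreteness scores."""
    words = [w.strip(".,!?").lower() for w in prompt.split()]
    nouns = [w for w in words if w in lexicon]
    avg = sum(lexicon[w] for w in nouns) / len(nouns) if nouns else 0.0
    return {"noun_count": len(nouns), "avg_concreteness": round(avg, 2)}

print(prompt_features("A dog in the rain, full of sorrow"))
```

A rubric in this spirit might then suggest, say, adding concrete nouns when the average concreteness of a prompt falls below some threshold.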

 

Keywords:

Text-to-image generative model, prompt engineering, AI-generated visual art, emotion expression, explainable AI

 

→ Original Link:

→ Author Website:

→ Class Presentation PPT

 


2023.04.25 Presentation by 劉士達 – LearnIoTVR: An End-to-End Virtual Reality Environment Providing Authentic Learning Experiences for Internet of Things

Paper title: LearnIoTVR: An End-to-End Virtual Reality Environment Providing Authentic Learning Experiences for Internet of Things

Authors: Zhengzhe Zhu, Ziyi Liu, Youyou Zhang, Lijun Zhu, Joey Huang, Ana M Villanueva, Xun Qian, Kylie Peppler, Karthik Ramani

Source: ACM CHI 2023 https://doi.org/10.1145/3544548.3581396 [PDF]

Presentation PPT: [PPT] [PDF]

ABSTRACT

The rapid growth of Internet-of-Things (IoT) applications has generated interest from many industries and a need for graduates with relevant knowledge. An IoT system is comprised of spatially distributed interactions between humans and various interconnected IoT components. These interactions are contextualized within their ambient environment, thus impeding educators from recreating authentic tasks for hands-on IoT learning. We propose LearnIoTVR, an end-to-end virtual reality (VR) learning environment which helps students to acquire IoT knowledge through immersive design, programming, and exploration of real-world environments empowered by IoT (e.g., a smart house). The students start the learning process by installing virtual IoT components we created in different locations inside the VR environment so that the learning will be situated in the same context where the IoT is applied. With our custom-designed 3D block-based language, students can program IoT behaviors directly within VR and get immediate feedback on their programming outcome. In the user study, we evaluated the learning outcomes among students using LearnIoTVR with a pre- and post-test to understand to what extent engagement in LearnIoTVR leads to gains in learning programming skills and IoT competencies. Additionally, we examined what aspects of LearnIoTVR support usability and learning of programming skills compared to a traditional desktop-based learning environment. The results from these studies were promising. We also acquired insightful user feedback which provides inspiration for further expansions of this system.
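The condition-action behaviors students author in the block language can be caricatured as simple rules mapping sensor readings to actuator commands; a hypothetical sketch (names and semantics invented, not the paper’s 3D block language):

```python
# Toy model of IoT behavior rules like those authored in a block-based
# language: each rule checks one sensor and, if its condition holds,
# emits an actuator command. All names here are invented for illustration.

def make_rule(sensor, op, threshold, actuator, command):
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    def rule(readings):
        return (actuator, command) if ops[op](readings[sensor], threshold) else None
    return rule

rules = [make_rule("temperature", ">", 28.0, "fan", "on"),
         make_rule("light", "<", 100, "lamp", "on")]

readings = {"temperature": 30.5, "light": 320}
actions = [r(readings) for r in rules if r(readings)]
print(actions)  # [('fan', 'on')]
```

In a VR environment such rules would be wired to simulated sensors and visible actuators, giving the immediate feedback the paper emphasizes.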

KEYWORDS

Virtual Reality, IoT, Block-based Programming, Project-based Learning, Immersive Programming, Embodied Interaction

 
