2022.03.01 陳昭潔 Report – Algorithmic Perfumery

Algorithmic Perfumery – ARS Electronica 2019, Interactive Art+

→ Prix Archive Page

→ PPT

Author – Frederik Duerinck

Abstract – In Algorithmic Perfumery, the world of scent is explored by using the visitor’s input to train the creative capabilities of an automated system. Custom scents are created by a machine learning algorithm based on the unique data we feed it. The outcome is a unique scent generated and compounded on-site. By participating in the experience, visitors contribute to the ongoing research to improve the system and reinvent the future of perfumery. Generative perfume design is an emerging practice of the not-too-distant future. Algorithmic Perfumery not only ignites the senses but also allows participants to walk away with a tangible and usable memory of the work.

Visitors complete a personality test lasting about 15 minutes, composed of standard questions and a few more focused on scent preference. After the answers are compiled, a code is generated. Participants then proceed to a contraption lined with tubes of concentrates, type in the code, and the machine mixes the concentrates in amounts based on the data provided. At the end of the assembly line, a small sample vial of the individually crafted scent awaits. Participants may then review their feelings about the scent, and in this way the AI learns and refines its scent-crafting abilities. An inspiringly unique approach to a seldom-represented creative process, Algorithmic Perfumery is indicative of the cohesive future between human ability and technological potential.
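To make the workflow in the abstract concrete, here is a minimal sketch only, not the actual Algorithmic Perfumery system: a questionnaire vector is mapped to per-concentrate dosages, and the visitor's rating nudges the mapping. The question count, concentrate count, softmax recipe, and feedback rule are all illustrative assumptions.

```python
# Hypothetical sketch of the pipeline described above:
# questionnaire answers -> model -> per-concentrate dosage -> feedback update.
import numpy as np

rng = np.random.default_rng(0)

N_QUESTIONS = 20          # personality / scent-preference items (assumed)
N_CONCENTRATES = 12       # tubes of aroma concentrates on the machine (assumed)
TOTAL_VOLUME_ML = 2.0     # size of the sample vial (assumed)

# Stand-in for the learned mapping from answers to a scent "recipe".
W = rng.normal(size=(N_CONCENTRATES, N_QUESTIONS))

def formulate(answers: np.ndarray) -> np.ndarray:
    """Map questionnaire answers (0..1) to millilitres per concentrate."""
    raw = W @ answers
    weights = np.exp(raw) / np.exp(raw).sum()   # softmax -> mixing ratios
    return weights * TOTAL_VOLUME_ML

def update_from_feedback(answers: np.ndarray, rating: float, lr: float = 0.05):
    """Crude online update: reinforce recipes the visitor rated highly."""
    global W
    W += lr * (rating - 0.5) * np.outer(formulate(answers), answers)

answers = rng.uniform(0.0, 1.0, N_QUESTIONS)    # one visitor's responses
print(np.round(formulate(answers), 3))          # ml per concentrate
update_from_feedback(answers, rating=0.8)       # visitor liked the scent
```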


2022.03.01 陳思豪 Report: The Deep Listener

Topic: The Deep Listener, PRIX ARS ELECTRONICA, THE 2021 WINNERS

Category: Computer Animation (CA)

Authors: Jakob Kudsk Steensen (DK)

Abstract: The Deep Listener (2019) is an audio-visual ecological expedition through Kensington Gardens and Hyde Park, the area surrounding the Serpentine Galleries. Designed as an augmented reality and spatial audio work downloadable as an app for mobile devices, it is both a site-specific public artwork and a digital archive of species that live within the park. It pushes the utility of augmented reality and technological tools to transform our spatial understanding of the natural world. The commission expands upon Kudsk Steensen’s practice of merging the organic, ecological, and technological in the building of complex worlds in order to tell stories about our current environmental reality.

Keywords: Deep Listener, Serpentine Galleries, Animation, Interaction Device, Augmented Reality, Slow Media

→ Class Presentation

Reference


Course Introduction – Academic Year 110, Second Semester

Technology Art Seminar (科技藝術書報討論)

Instructor: Prof. 許素朱 (with other participating faculty)
<xn.techart@gmail.com>
Course website: http://www.fbilab.org/nthu/aet/seminar
Time: Graduate course, Tuesdays 18:30–20:30
Location: Main campus, General Building II, Room 603 (next to the cross-college master's program office)

Course Description (Course Objectives)
This course leads students to ① follow the latest trends in international technology art creation and research by reading and presenting papers and works selected from the field's most important academic conferences, journals, and art exhibitions. The instructor assigns a reading list of papers or technology art works for students to choose from, and through presentations and discussion, students develop the ability to create works and carry out technical research in the field of technology art. ② Faculty from a range of disciplines will be invited at selected times to give interdisciplinary lectures or join the discussion. ③ The course also teaches key skills for writing and submitting academic papers.


2022.01.11 王聖銘 Report: MACHINE AUGURIES


Report: Link

Topic: MACHINE AUGURIES

Authors: Dr. Alexandra Daisy Ginsberg, Johanna Just, Ness Lafoy, Ana Maria Nicolaescu

Reference: ARS Electronica Festival 2020 (Interactive Art)

Abstract:

Before sunrise, a redstart begins his solo with a warbling call. Other birds respond, together creating the dawn chorus: a back-and-forth that peaks thirty minutes before and after the sun emerges in the spring and early summer, as birds defend their territory and call for mates. Light and sound pollution from our 24-hour urban lifestyle affects birds, who are singing earlier, louder, for longer, or at a higher pitch. But only those species that adapt survive. Machine Auguries questions how the city might sound with changing, homogenizing, or diminishing bird populations.

In the multi-channel sound installation, a natural dawn chorus is taken over by artificial birds, their calls generated using machine learning. Solo recordings of chiffchaffs, great tits, redstarts, robins, thrushes, and entire dawn choruses were used to train two neural networks (a Generative Adversarial Network, or GAN), pitted against each other to sing. Reflecting on how birds develop their song from each other, a call and response of real and artificial birds spatializes the evolution of a new language. Samples taken from each stage (epoch) in the GAN’s training reveal the artificial birds’ growing lifelikeness.

The composition follows the arc of a dawn chorus, compressed into ten minutes. The listener experiences the sound of a fictional urban parkland, entering in the dim silvery light of pre-dawn. We start with a solo from a lone “natural” redstart. In response, from across the room, we hear an artificial redstart sing back, sampled from an early epoch. A “natural” robin joins the chorus, with a call and response set up between natural and artificial birds. The chorus rises as other species enter, reaching a crescendo five minutes in. As the decline starts and the room illuminates to a warm yellow, we realize that the artificial birds, which have gained sophistication in their song, are dominating.
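As a hedged illustration of the training process the abstract describes (two networks pitted against each other, with samples drawn from successive epochs), here is a minimal GAN training loop in Python/PyTorch. The flattened-spectrogram representation, network sizes, and stand-in data are assumptions; the artists' actual pipeline is not documented here.

```python
# Minimal GAN sketch: a generator learns to produce bird-song spectrogram
# patches while a discriminator tells them apart from real recordings.
import torch
import torch.nn as nn

LATENT, SPEC = 64, 128 * 16   # noise size, flattened spectrogram patch (assumed)

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, SPEC), nn.Tanh())
D = nn.Sequential(nn.Linear(SPEC, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_epoch(real_batches):
    """One 'epoch' of adversarial training; intermediate epochs could be
    sampled to hear the artificial birds growing more lifelike."""
    for real in real_batches:                     # real: (B, SPEC), values in [-1, 1]
        b = real.size(0)
        fake = G(torch.randn(b, LATENT))

        # Discriminator step: real -> 1, fake -> 0
        loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to fool the discriminator
        loss_g = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Random stand-in data (real batches would be spectrogram patches of recordings):
train_epoch([torch.rand(8, SPEC) * 2 - 1 for _ in range(4)])
```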


2022.01.11 田子平 Report


SIGGRAPH 2021

Topic: Common Datum

Author: Tobias Klein, Jane Prophet

PPT: https://docs.google.com/presentation/d/1eqoim5uIjCwtd-o3YjbkrfZ2MsCRF_OpiAp23OErySM/edit?usp=sharing

Abstract:

“Common Datum” is an environmentally reactive, hygroscopic sculpture. A series of suspended vessels continuously absorb the humidity in the exhibition space, generated through the breath of the audience. Slowly, each 3D-printed condenser accumulates water that drips into a series of glass volumes. Even though the vessels have individual shapes and absorb moisture locally at different rates, a common datum is created across all of them. The work articulates a confluence between traditional and digital craft in the context of environmental, participatory art.


2022.1.11 翁政弘 Report: ElectroRing

Topic: ElectroRing: Subtle Pinch and Touch Detection with a Ring [pdf] [ppt] [paper]

Authors: Wolf Kienzle & Eric Whitmire (FRL Research, Redmond, WA, USA)

Abstract: We present ElectroRing, a wearable ring-based input device that reliably detects both onset and release of a subtle finger pinch and, more generally, contact of the fingertip with the user’s skin. ElectroRing addresses a common problem in ubiquitous touch interfaces, where subtle touch gestures with little movement or force are not detected by a wearable camera or IMU. ElectroRing’s active electrical sensing approach provides a step-function-like change in the raw signal, for both touch and release events, which can be easily detected using only basic signal processing techniques. Notably, ElectroRing requires no second point of instrumentation, but only the ring itself, which sets it apart from existing electrical touch detection methods. We built three demo applications to highlight the effectiveness of our approach when combined with a simple IMU-based 2D tracking system.
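The claim that touch and release appear as step-function-like changes detectable with basic signal processing can be illustrated with a small sketch. The thresholds, state-based debouncing, and synthetic trace below are assumptions, not the detector from the paper.

```python
# Hedged illustration: detect step edges (touch/release) in a 1-D raw signal.
import numpy as np

rng = np.random.default_rng(0)

def detect_touch_events(signal: np.ndarray, rise: float = 0.5, fall: float = -0.5):
    """Return (sample_index, 'touch'|'release') events from a 1-D raw signal."""
    events = []
    touching = False
    diff = np.diff(signal)                 # step edges become large differences
    for i, d in enumerate(diff):
        if not touching and d > rise:
            touching = True
            events.append((i + 1, "touch"))
        elif touching and d < fall:
            touching = False
            events.append((i + 1, "release"))
    return events

# Synthetic raw trace: baseline, a pinch (signal jumps up), then release.
sig = np.concatenate([np.zeros(50), np.ones(80), np.zeros(50)]) + 0.02 * rng.normal(size=180)
print(detect_touch_events(sig))            # e.g. [(50, 'touch'), (130, 'release')]
```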

 


2022.1.11 楊元福 Report: ChoreoMaster


SIGGRAPH 2021

Topic: ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis [ppt]

Authors: Kang Chen et al.

Abstract:

Despite strong demand in the game and film industry, automatically synthesizing high-quality dance motions remains a challenging task. In this paper, we present ChoreoMaster, a production-ready music-driven dance motion synthesis system. Given a piece of music, ChoreoMaster can automatically generate a high-quality dance motion sequence to accompany the input music in terms of style, rhythm and structure. To achieve this goal, we introduce a novel choreography-oriented choreomusical embedding framework, which successfully constructs a unified choreomusical embedding space for both style and rhythm relationships between music and dance phrases. The learned choreomusical embedding is then incorporated into a novel choreography-oriented graph-based motion synthesis framework, which can robustly and efficiently generate high-quality dance motions following various choreographic rules. As a production-ready system, ChoreoMaster is sufficiently controllable and comprehensive for users to produce desired results. Experimental results demonstrate that dance motions generated by ChoreoMaster are accepted by professional artists.
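A very rough sketch of the two-stage idea in the abstract, under heavy assumptions: dance and music phrases share an embedding space, and a Viterbi-style dynamic program over a phrase graph picks one dance phrase per music phrase while penalizing costly transitions. The embeddings, transition costs, and the DP itself are illustrative stand-ins, not the published ChoreoMaster system.

```python
# Illustrative phrase-graph search: pick one dance phrase per music phrase.
import numpy as np

rng = np.random.default_rng(1)
D, N_DANCE, N_MUSIC = 16, 30, 8
dance_emb = rng.normal(size=(N_DANCE, D))      # library of dance phrases
music_emb = rng.normal(size=(N_MUSIC, D))      # phrases of the input music
transition = rng.uniform(0, 1, size=(N_DANCE, N_DANCE))  # choreographic transition cost

def match_cost(m, d):
    """Style/rhythm mismatch as distance in the shared embedding space."""
    return np.linalg.norm(music_emb[m] - dance_emb[d])

def synthesize():
    """Min-cost path through the phrase graph (Viterbi-style DP)."""
    cost = np.array([match_cost(0, d) for d in range(N_DANCE)])
    back = np.zeros((N_MUSIC, N_DANCE), dtype=int)
    for m in range(1, N_MUSIC):
        step = cost[:, None] + transition           # (prev phrase, next phrase)
        back[m] = step.argmin(axis=0)
        cost = step.min(axis=0) + np.array([match_cost(m, d) for d in range(N_DANCE)])
    path = [int(cost.argmin())]
    for m in range(N_MUSIC - 1, 0, -1):
        path.append(int(back[m][path[-1]]))
    return path[::-1]                               # dance phrase index per music phrase

print(synthesize())
```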

Paper:

https://dl.acm.org/doi/abs/10.1145/3450626.3459932

https://netease-gameai.github.io/ChoreoMaster/Paper.pdf

Introduction:

https://blog.siggraph.org/2021/09/how-choreomaster-combines-cutting-edge-ai-and-graphics-technologies.html/

https://netease-gameai.github.io/ChoreoMaster/


2021.12.28 黃睿緯 Report

PPT: https://docs.google.com/presentation/d/1H8qDtTGeQOsDFDornnaXGuTo3aOVEhtj/edit?usp=sharing&ouid=103926017209371204990&rtpof=true&sd=true

Respire: Virtual Reality Art with Musical Agent Guided by Respiratory Interaction [Leonardo Music Journal]

Website Link: https://kivanctatar.com/Respire

Video:


2021.12.28 古士宏 Leonardo Journal Report: Stowaway City


Report: Link

Topic: Stowaway City: An Immersive Audio Experience for Multiple Tracked Listeners in a Hybrid Listening Environment

Author: Michael McKnight

Original Article:

https://pureadmin.qub.ac.uk/ws/portalfiles/portal/241455682/McKnight_DEV.pdf

Abstract:

Stowaway City is an immersive audio experience that combines electroacoustic composition and storytelling with extended reality. The piece was designed to accommodate multiple listeners in a shared auditory virtual environment. Each listener, based on their tracked position and rotation in space, wirelessly receives an individual binaurally decoded sonic perspective via open-back headphones. The sounds and unfolding narrative are mapped to physical locations in the performance area, which are only revealed through exploration and physical movement. Spatial audio is simultaneously presented to all listeners via a spherical loudspeaker array that supplements the headphone audio, thus forming a hybrid listening environment. The work is presented as a conceptual and technical design paradigm for creative sonic application of the technology in this medium. The author outlines a set of strategies that were used to realize the composition and technical affordances of the system.
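As a hedged sketch of the per-listener rendering idea (tracked position and head rotation determining each listener's sonic perspective), the snippet below computes a simple distance- and azimuth-based stereo mix per sound object. The gain and panning model is an assumption; the actual work uses proper binaural decoding over open-back headphones plus a loudspeaker array.

```python
# Illustrative per-listener mix from tracked pose to per-source stereo gains.
import math

def listener_mix(listener_xy, yaw_rad, sources):
    """Return per-source (left_gain, right_gain) for one tracked listener."""
    mixes = {}
    for name, (sx, sy) in sources.items():
        dx, dy = sx - listener_xy[0], sy - listener_xy[1]
        dist = max(math.hypot(dx, dy), 0.5)               # avoid division blow-up
        azimuth = math.atan2(dy, dx) - yaw_rad            # angle relative to the head
        pan = math.sin(azimuth)                           # -1 .. 1 across the stereo field
        gain = 1.0 / dist ** 2                            # inverse-square distance falloff
        mixes[name] = (gain * (1 - pan) / 2, gain * (1 + pan) / 2)
    return mixes

# Hypothetical scene objects mapped to physical locations in the performance area:
sources = {"harbour": (4.0, 1.0), "voice": (-2.0, 3.0)}
print(listener_mix((0.0, 0.0), yaw_rad=0.0, sources=sources))
```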


2021.12.28 王聖銘 Report: First Impression: AI Understands Personality


Report: Link

Topic: First Impression: AI Understands Personality

Authors: Xiaohui Wang, Xia Liang, Miao Lu, Jingyan Qin

Reference: ACM Multimedia 2020

Abstract:

When you first encounter a person, a mental image of that person is formed. First Impression, an interactive artwork, is proposed to let AI understand human personality at first glance. The mental image is demonstrated by Beijing opera facial makeups, which convey character personality through a combination of realism and symbolism. We build a Beijing opera facial makeup dataset and a semantic dataset of facial features to establish relationships among real faces, personalities and facial makeups. First Impression detects faces, recognizes personality from facial appearance and finds the matching Beijing opera facial makeup. Finally, the morphing process from real face to facial makeup is shown to let users enjoy the process of AI understanding personality.
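The pipeline in the abstract (detect a face, estimate personality from appearance, match the nearest Beijing opera facial makeup) can be sketched as below. The feature encoder, the five-trait personality model, and the tiny makeup database are hypothetical stand-ins for the datasets and models the authors actually built.

```python
# Illustrative pipeline: face features -> personality vector -> nearest makeup.
import numpy as np

rng = np.random.default_rng(2)
PERSONALITY_DIMS = 5                        # e.g. five trait scores (assumption)

# Hypothetical makeup database: each entry carries a personality profile.
makeup_db = {
    "loyal_red":     rng.uniform(0, 1, PERSONALITY_DIMS),
    "fierce_black":  rng.uniform(0, 1, PERSONALITY_DIMS),
    "cunning_white": rng.uniform(0, 1, PERSONALITY_DIMS),
}

def personality_from_face(face_features: np.ndarray) -> np.ndarray:
    """Stand-in for the learned appearance-to-personality model."""
    W = rng.normal(size=(PERSONALITY_DIMS, face_features.size))
    return 1 / (1 + np.exp(-(W @ face_features)))         # trait scores in (0, 1)

def match_makeup(traits: np.ndarray) -> str:
    """Nearest makeup in personality space."""
    return min(makeup_db, key=lambda k: np.linalg.norm(makeup_db[k] - traits))

face_features = rng.normal(size=64)         # would come from a face detector/encoder
traits = personality_from_face(face_features)
print(match_makeup(traits))                 # makeup the real face would morph toward
```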


2021.12.28 陳麗宇 Report – U!Scientist

U!Scientist: Designing for People-Powered Research in Museums – SIGCHI 2021, Best Paper

U!Scientist Web

Full-text Paper

→ Class Presentation

 

Authors – Mmachi God’sglory Obiorah et al.

Keywords – Citizen science, museums, interactive tabletop displays

Abstract – Scientists have long sought to engage public audiences in research through citizen science projects. This project engages public audiences in contributing to real research as part of their visit to a museum. We present the design and evaluation of U!Scientist, an interactive multi-person tabletop exhibit based on the online Zooniverse project, Galaxy Zoo. We installed U!Scientist in a planetarium and collected video, computer logs, naturalistic observations, and surveys with visitors. Our findings demonstrate the potential of exhibits to engage new audiences in collaborative scientific discussions as part of people-powered research.


2021.12.14 陳麗宇 Report – Tracking the Loving Gaze

Tracking the Loving Gaze – Leonardo 2021, Artists’ Article

Leonardo Journal [MIT Press]

Journal

→ Class Presentation

 

Authors – Theopisti Stylianou-Lambert & Omiros Panayides (Cyprus University of Technology, CY)

Keywords – Personal photography, Eye tracker, Loving gaze

Abstract – Tracking the Loving Gaze is a futile attempt to follow, map and capture the way cherished personal photographs are viewed. The authors asked 30 survey subjects to use an eye tracker while looking at a preselected photograph that held a special meaning for them. The raw visual data from this process—heat maps, focus maps and scan paths—became the foundation of a body of work that includes darkroom prints, short videos and a limited-edition artist book. Apart from exploring the invisible viewing processes of personal photography, this article introduces the concepts of the detached and the invested viewer as well as the corresponding concepts of the cold and the loving gaze.
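For readers curious how raw eye-tracker samples become heat maps like those described above, here is a minimal sketch that accumulates a Gaussian footprint at each gaze sample. The image size, sample format, and kernel width are assumptions; the artists worked from their tracker's own exports.

```python
# Illustrative gaze heat map from raw (x, y) samples.
import numpy as np

def gaze_heatmap(samples, width=640, height=480, sigma=25.0):
    """samples: iterable of (x, y) gaze points in pixels -> 2-D intensity map."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for gx, gy in samples:
        heat += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()                # normalize for display

# A viewer dwelling on two regions of a cherished photograph:
samples = [(200, 150)] * 30 + [(430, 300)] * 10
hm = gaze_heatmap(samples)
print(hm.shape, round(float(hm[150, 200]), 2), round(float(hm[300, 430]), 2))
```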


2021.12.14 翁政弘 Report

MM ’21 (ACM Multimedia): October 20–24, 2021, Virtual Event, China

Topic: Dual Learning Music Composition and Dance Choreography [Paper][ppt][pdf]

Authors: Shuang Wu et al.

Abstract:
Music and dance have always co-existed as pillars of human activities, contributing immensely to the cultural, social, and entertainment functions in virtually all societies. Notwithstanding the gradual systematization of music and dance into two independent disciplines, their intimate connection is undeniable and one artform often appears incomplete without the other. Recent research works have studied generative models for dance sequences conditioned on music. The dual task of composing music for given dances, however, has been largely overlooked.

In this paper, we propose a novel extension, where we jointly model both tasks in a dual learning approach. To leverage the duality of the two modalities, we introduce an optimal transport objective to align feature embeddings, as well as a cycle consistency loss to foster overall consistency. Experimental results demonstrate that our dual learning framework improves individual task performance, delivering generated music compositions and dance choreographs that are realistic and faithful to the conditioned inputs.
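A toy rendering of the dual-learning setup, under stated assumptions: one network maps music features to dance features, its dual maps dance back to music, and cycle-consistency terms tie the round trip together. The feature sizes and the plain MSE alignment below stand in for the paper's optimal-transport objective.

```python
# Illustrative dual-learning step with reconstruction + cycle-consistency losses.
import torch
import torch.nn as nn

M_DIM, D_DIM = 32, 48   # music / dance feature sizes (assumed)
music_to_dance = nn.Sequential(nn.Linear(M_DIM, 64), nn.ReLU(), nn.Linear(64, D_DIM))
dance_to_music = nn.Sequential(nn.Linear(D_DIM, 64), nn.ReLU(), nn.Linear(64, M_DIM))
opt = torch.optim.Adam(
    list(music_to_dance.parameters()) + list(dance_to_music.parameters()), lr=1e-3
)
mse = nn.MSELoss()

def dual_step(music, dance):
    """One optimization step over a paired (music, dance) batch."""
    gen_dance = music_to_dance(music)
    gen_music = dance_to_music(dance)
    loss = (
        mse(gen_dance, dance) + mse(gen_music, music)      # paired supervision
        + mse(dance_to_music(gen_dance), music)            # music -> dance -> music
        + mse(music_to_dance(gen_music), dance)            # dance -> music -> dance
    )
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

print(dual_step(torch.randn(16, M_DIM), torch.randn(16, D_DIM)))
```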


2021.12.14 楊元福 Report


MM ’20: the 28th ACM International Conference on Multimedia

Topic: Anisotropic Stroke Control for Multiple Artists Style Transfer [ppt]

Authors: XuanHong Chen et al.

Abstract:

Though significant progress has been made in artistic style transfer, semantic information is usually difficult to preserve in a fine-grained, locally consistent manner with most existing methods, especially when multiple artists' styles must be transferred within one single model. To circumvent this issue, we propose a Stroke Control Multi-Artist Style Transfer framework. On the one hand, we design an Anisotropic Stroke Module (ASM) which realizes dynamic adjustment of style strokes between non-trivial and trivial regions. ASM endows the network with the ability to keep adaptive semantic consistency among various styles. On the other hand, we present a novel Multi-Scale Projection Discriminator to realize texture-level conditional generation. In contrast to a single-scale conditional discriminator, our discriminator is able to capture multi-scale texture cues to effectively distinguish a wide range of artistic styles. Extensive experimental results demonstrate the feasibility and effectiveness of our approach. Our framework can transform a photograph into oil paintings of different artistic styles via only ONE single model. Furthermore, the results carry a distinctive artistic style while retaining the anisotropic semantic information.

Link:  https://dl.acm.org/doi/abs/10.1145/3394171.3413770
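To illustrate the Multi-Scale Projection Discriminator idea from the abstract above, here is a hedged sketch: the same conditional discriminator scores the image at several scales, with the artist/style label entering through a projection term. Layer sizes, the pooling-based head, and the scale set are assumptions, not the authors' architecture.

```python
# Illustrative multi-scale projection discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionDisc(nn.Module):
    def __init__(self, n_styles: int, feat_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.linear = nn.Linear(feat_dim, 1)
        self.embed = nn.Embedding(n_styles, feat_dim)   # projection of the style label

    def forward(self, img, style):
        h = self.features(img).mean(dim=(2, 3))          # global average pool
        return self.linear(h) + (self.embed(style) * h).sum(dim=1, keepdim=True)

def multi_scale_scores(disc, img, style, scales=(1.0, 0.5, 0.25)):
    """Apply the same discriminator to progressively downsampled copies."""
    return [
        disc(img if s == 1.0 else
             F.interpolate(img, scale_factor=s, mode="bilinear", align_corners=False),
             style)
        for s in scales
    ]

disc = ProjectionDisc(n_styles=8)
scores = multi_scale_scores(disc, torch.randn(2, 3, 128, 128), torch.tensor([3, 5]))
print([s.shape for s in scores])
```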


2021.11.30 古士宏 Report

Topic: 《基於視覺之意識遞延科技藝術創作論述》
Vision-based Consciousness Transmit and Extend Techno Art Creation

Author: 黃浩旻

Abstract:
This thesis starts from an observation of the relationship between humans and technology: can consciousness be carried by technology to receive and control information in the real world? Clarifying this question requires drawing on fields of knowledge such as philosophy, psychology, biology, and futurology. The question serves as the main theme of the creative work, and the process of experiencing and making the works is analyzed to clarify the meaning and logic of their results.

The researcher had not received art-related training before entering the institute and therefore first had to work through a process of clarifying what art is. Over the course of study and creation, he gradually established his own understanding of artistry and creation and cultivated the habit of exploring the nature of things.

The researcher adopted the electronic control techniques learned as an undergraduate as one of his creative methods, began to use familiar powered devices as a medium, and examined his own experience to develop the main axis of the work. Two works related to the theme are selected for discussion. The first, “Lapse”, observes the process of communication between two machines that resemble cellular automata, exploring the differences between machine and biological behavior, whether one can substitute for the other, and the impact that such substitution brings. In the second piece, “The PetriDish”, the researcher shifts attention to his own view of the world, which resembles a virtual-world hypothesis, and theorizes this hypothesis into an interactive installation so that it can be experienced. Without being deliberately staged, this raises further issues worth considering, namely the relationship between consciousness and the body, which extends to reflection on the nature of the world.
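Since “Lapse” is described as machines that resemble cellular automata, a minimal 1-D elementary cellular automaton (rule 110) is sketched below purely as an illustration of the kind of rule-based behavior the abstract evokes; the installation's actual rules are not documented in the abstract.

```python
# Minimal 1-D elementary cellular automaton (rule 110), for illustration only.
def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton one generation (wrap-around)."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 30 + [1] + [0] * 30            # a single live cell in the middle
for _ in range(10):
    print("".join("█" if c else "·" for c in cells))
    cells = step(cells)
```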

Keywords: Consciousness, Human Augmentation, Interactive Art, Installation Art, Virtual Reality.

Report: LINK
