0510 Report by 林巖: “Leveraging NVIDIA Omniverse for In Situ Visualization”

Title: Leveraging NVIDIA Omniverse for In Situ Visualization

Authors: Mathias Hummel and Kees van Kooten, NVIDIA, Santa Clara, USA. Email: {mathiash,kvankooten}@nvidia.com

Keywords:

In situ visualization, NVIDIA Omniverse, ParaView, Catalyst

Abstract: “Omniverse is NVIDIA’s collaboration platform for 3D production pipelines.”

Summary:

Omniverse combines multiple resources, such as visualization and simulation engines, through packages added to the platform, making it a capable solution for building shared metaverse-style worlds. Working through Omniverse brings performance and efficiency to these tasks, whether carried out alone or in a group, and its cloud deployment supports working models in various directions.

PPT File:

Omniverse

Posted in Semester 110-2 | Comments are closed.

2022.4.12 Report by 陳昭潔: “Making Up 3D Bodies: Artistic and Serendipitous Modeling of Digital Human Figures”

Paper from ACM 2021.

Making Up 3D Bodies: Artistic and Serendipitous Modeling of Digital Human Figures

Proceedings of the ACM on Computer Graphics and Interactive Techniques

Volume 4, Issue 2, July 2021, Article No. 23, pp. 1–9

Abstract:

This paper describes the process of developing a software tool for digital artistic exploration of 3D human figures. Previously available software for modeling mesh-based 3D human figures restricts user output based on normative assumptions about the form that a body might take, particularly in terms of gender, race, and disability status, which are reinforced by ubiquitous use of range-limited sliders mapped to singular high-level design parameters. CreatorCustom, the software prototype created during this research, is designed to foreground an exploratory approach to modeling 3D human bodies, treating the digital body as a sculptural landscape rather than a pre-supposed form for rote technical representation. Building on prior research into serendipity in Human-Computer Interaction and 3D modeling systems for users at various levels of proficiency, among other areas, this research comprises two qualitative studies and investigation of the impact on the first author’s artistic practice. Study 1 uses interviews and practice sessions to explore the practices of six queer artists working with the body and the language, materials, and actions they use in their practice; these then informed the design of the software tool. Study 2 investigates the usability, creativity support, and bodily implications of the software when used by thirteen artists in a workshop. These studies reveal the importance of exploration and unexpectedness in artistic practice, and a desire for experimental digital approaches to the human form.

Keywords: 3D modeling, human figures, digital human bodies, visual artists

Presentation

Paper


2022.4.11 Report by 林巖: “Electrolysis Bubble Display based Art Installations”

Paper from ACM 2021.

Electrolysis Bubble Display-based Art Installations.

In Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’21), February 14–17, 2021, Salzburg, Austria. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3430524.3440632

Abstract:

“Research was conducted on a digital information display using electrolysis bubbles.”

  • The interface translates digital signals into bubbles, so information is displayed as patterns of bubbles during each signal transition.
  • The research reviews how the liquid and the bubbles behave over time, and what happens in the liquid while data is being displayed.
  • The displayed data carries the digitized information, and bubble generation was optimized by trying various materials.
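The display principle described above can be sketched with a toy scheduler: because electrolysis bubbles rise at a roughly constant speed, each row of a bitmap maps to a firing time for the electrodes. The sketch below is a hypothetical Python illustration; the electrode layout, rise speed, and the `schedule_pulses` helper are all assumptions, not taken from the paper.

```python
# Hypothetical sketch: schedule electrode pulses so rising electrolysis
# bubbles form a bitmap as they ascend. Assumes a constant bubble rise speed.

def schedule_pulses(bitmap, rise_speed_mm_s=20.0, row_pitch_mm=2.0):
    """bitmap: rows of 0/1 values, top row first, one column per electrode.
    Returns (time_s, column) pairs telling when to fire each electrode so
    the bubbles line up into the image as they rise."""
    row_delay_s = row_pitch_mm / rise_speed_mm_s  # time between image rows
    pulses = []
    for r, row in enumerate(bitmap):
        t = r * row_delay_s  # top row is emitted first, so it rises highest
        for col, on in enumerate(row):
            if on:
                pulses.append((round(t, 4), col))
    return pulses
```

With a 20 mm/s rise speed and a 2 mm row pitch, consecutive image rows fire 0.1 s apart, so timing precision directly limits the vertical resolution of the display.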

KEYWORDS

Bubble Display; Electrolysis; Water; Art; Ephemeral User Interface

PPT File: Electrolysis Bubble Display


2022.04.12 Report by 陳麗宇 – MR Game “The Woods”

“The Woods”: A Mixed-Reality Two-Player Cooperative Game – ACM SIGGRAPH 2021, Art Paper

→ Full-text Paper

→ Class Presentation

 

Authors – Kyoung Swearingen et al.

Key words – Cooperative Game, Mixed Reality, Sonic Experience

Abstract – While loneliness in our real lives is increasingly recognized as having dire physical, mental, and emotional consequences, cooperative games have been shown to build empathy and provide positive social impact. In this paper, the authors present “The Woods,” a local cooperative, mixed-reality game using augmented reality and 4-channel audio spatialization panning that provides players with face-to-face interactions in pursuit of a shared goal. This paper discusses the narrative, mechanical, and sonic components of the game, as well as the game’s development process and the players’ experiences. The goal of our team is to develop a narrative-driven AR game that promotes collaborative problem-solving and engages players in an emergent physical and digital experience.


2022.4.12 Report by 陳思豪: “drop”, An Interactive Art Installation

“drop”: An Interactive Art Installation with Water Drop Projection-Mapping, SIGGRAPH 2021, Art Papers

Authors: Sadam Fujioka

Abstract:

This paper describes an interactive art installation titled “drop.” It is the first artwork using the Waterdrop Projection-Mapping (WPM) system, which animates levitating waterdrops. With this artwork, the anno lab team infuses physical characteristics into computer graphics and materializes them as tangible pixels. WPM consists of a waterdrop generator and an ultra high-speed projector. The team uses an ultra high-speed projector to cast stroboscopic spotlights mapping on waterdrops to create an optical illusion of animating each waterdrop individually. This is a new technique to show computer animation by animating levitating waterdrops. This technique explores a new horizon to create animations with tangible pixels that the viewer can touch physically.
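The stroboscopic illusion at the heart of WPM can be illustrated with standard aliasing arithmetic. This is a generic physics sketch with made-up numbers, not the authors' implementation: drops pass at speed v with spacing s, each flash advances the pattern by v·T, and the eye wraps that advance into the nearest multiple of the spacing.

```python
# Generic stroboscopic-aliasing sketch (not the paper's code): how fast a
# stream of falling drops *appears* to move under a periodic strobe.

def apparent_shift(v_mm_s, spacing_mm, strobe_period_s):
    """Apparent displacement of the drop pattern per flash, wrapped into
    (-spacing/2, spacing/2]; 0 means the drops look frozen in midair."""
    shift = (v_mm_s * strobe_period_s) % spacing_mm
    if shift > spacing_mm / 2:
        shift -= spacing_mm  # closer to the previous drop: apparent upward drift
    return shift
```

Flashing at exactly the drop-passage period freezes the stream, while a slightly longer period makes the drops appear to crawl downward; modulating this offset per flash is one way frames of animation can be "played" on a drop stream.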

Keywords: Interactive Art Installation, Waterdrop Projection Mapping, High Speed Projector, Stroboscopic Effect, Human Computer Interaction


Class Presentation

Publication Link


Invited Talk: A Modern Perspective and Practice of the Way of Nature (林巖)

Invited Talk: A Modern Perspective and Practice of the Way of Nature (an account of a traditional harmonizing mindset and its methods)
Speaker: 林巖
Time: March 29, 2022, 6:30 PM

About the speaker, 林巖:
Since 1987: studied and absorbed the knowledge of traditional Eastern healing systems, beginning related study in the field of traditional folk therapies.
Since 2005: pursued further study through certified instructor courses in folk traditional healing, both domestic and international.
2016–2019: promoted new healing products and systems at international healing exhibitions, using different materials and media to explore the differences and commonalities between ancient and modern approaches.
2019: performed authentication of historical artifacts, antiques, and calligraphy and paintings in Vietnam, China, and elsewhere.
Since 1999: has worked in the technology industry, focusing on products and research in the PC industry and high-speed signaling.


2022.3.15 Report by 陳思豪: DronePaint

DronePaint: Swarm Light Painting with DNN-based Gesture Recognition – SIGGRAPH 2021, Emerging Technologies

Authors: Valerii Serpiva, Ekaterina Karmanova, Aleksey Fedoseev, Stepan Perminov, Dzmitry Tsetserukou

Abstract:

We propose a novel human-swarm interaction system that allows the user to directly control a swarm of drones in a complex environment through trajectory drawing with a hand-gesture interface based on DNN-based gesture recognition.

The developed CV-based system allows the user to control the swarm behavior in real time without additional devices, through human gestures and motions, providing convenient tools to change the swarm’s shape and formation. Two types of interaction were proposed and implemented to adjust the swarm hierarchy: trajectory drawing and free-form trajectory generation control.
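One step of such a pipeline, turning a hand-drawn trajectory into waypoints a drone can follow, can be sketched as below. This is a hypothetical illustration; the `resample` helper and the fixed waypoint spacing are assumptions, not the authors' code.

```python
# Hypothetical sketch: resample a hand-drawn gesture trajectory into evenly
# spaced waypoints for a drone to follow.
import math

def resample(points, step):
    """points: (x, y) samples of the drawn path. Returns waypoints spaced
    `step` apart along the polyline, starting at the first sample."""
    out = [points[0]]
    covered = 0.0  # distance already walked toward the next waypoint
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while covered + seg >= step:
            t = (step - covered) / seg      # fraction of the segment to consume
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= step - covered
            covered = 0.0
            out.append((x0, y0))
        covered += seg
    return out
```

Even spacing matters because raw gesture samples bunch up wherever the hand slows down; resampling by arc length gives the drones a constant-speed path regardless of how the trajectory was drawn.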

Keyword: Human-Drone Interaction, Light Painting, Gesture Recognition, Deep Neural Network

Class Presentation

Publication Link


2022.03.15 Report by 林巖: Weighted Walking: Propeller-based On-leg Force Simulation of Walking in Fluid Materials in VR

Introduction

“WEIGHTED WALKING: PROPELLER-BASED ON-LEG FORCE SIMULATION OF WALKING IN FLUID MATERIALS IN VR”

Building on previous studies of on-leg haptic feedback in VR, this research offers a new way to render leg sensations: propeller-based devices worn on the legs that simulate the resistance of walking through fluid materials.

The paper describes the devices and the VR applications built to demonstrate them, and also reviews end-user considerations such as the weight of the worn hardware and how sensor data is used by the VR application.
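The force such propellers must render can be illustrated with the standard quadratic drag law. This is a generic physics sketch; the coefficients below are illustrative and not taken from the paper.

```python
# Generic drag sketch: the thrust an on-leg propeller would need to mimic the
# resistance of moving a leg through a virtual fluid. Coefficients are made up.

RHO_WATER = 1000.0  # density of water, kg/m^3

def drag_force_n(speed_m_s, drag_coeff=1.0, area_m2=0.03, rho=RHO_WATER):
    """Quadratic drag F = 0.5 * rho * Cd * A * v^2, in newtons."""
    return 0.5 * rho * drag_coeff * area_m2 * speed_m_s ** 2
```

Under these assumed numbers a leg swinging at 1 m/s through a water-like fluid would need about 15 N of resistive thrust, and because drag grows with the square of speed, doubling the leg speed quadruples the required thrust.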

EmergyTech_Asia_2021

Publication


2022.3.15 Report by 陳昭潔 – Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

– ACM SIGGRAPH ASIA 2021, Emerging Technologies

 

Publication

Presentation

Authors: 

Yan He, George Chernyshov, Dingding Zheng, Jiawen Han, Ragnar Thomsen, Danny Hynds, Yuehui Yang, Yun Suen Pai, Kai Kunze, Kouta Minamizawa

Abstract:

We propose Frisson Waves, a real-time system to detect, trigger and share frisson during music performances. The system consists of a physiological sensing wristband for detecting frisson and a thermo-haptic neckband for inducing frisson. This project aims to improve the connectedness of audience members and performers during music performances by sharing frisson. We present the results of an initial concert workshop and a feasibility study of our prototype.
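The detection side of such a system could, for example, watch for a rapid rise in a physiological signal. The sketch below is a hypothetical threshold detector, not the authors' algorithm; the sampling rate, window, and threshold values are invented.

```python
# Hypothetical sketch (not the paper's method): flag frisson-like events when
# a physiological signal (e.g. skin conductance) rises quickly over a window.

def detect_events(signal, rate_hz, rise_per_s=0.5, window_s=1.0):
    """signal: samples of the sensed value. Returns indices of samples where
    the rise over the trailing window exceeds the threshold."""
    w = max(1, int(window_s * rate_hz))  # window length in samples
    threshold = rise_per_s * window_s    # required rise over that window
    return [i for i in range(w, len(signal))
            if signal[i] - signal[i - w] > threshold]
```

A windowed rise test like this ignores slow baseline drift and reacts only to sudden onsets, which is the property a wearable trigger for the thermo-haptic neckband would need.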


Report by 林巖: Reverse Pass-Through VR

Abstract:

Reverse pass-through VR.

“We introduce reverse pass-through VR, wherein a three-dimensional view of the wearer’s eyes is presented to multiple outside viewers in a perspective-correct manner, with a prototype headset containing a world-facing light field display.”

This prototype connects the wearer to the real world through an outward-facing display that presents a natural image of the eyes, meeting the social expectations of eye contact that an opaque headset would otherwise block.

Art Website

Class Presentation: Emergy_tech2021

 


2022.03.15 Report by 陳麗宇 – Gesture Recognition

Recognition of Gestures over Textiles with Acoustic Signatures – ACM SIGGRAPH ASIA 2021, Emerging Technologies

Publication

→ Class Presentation

 

Authors – Pui Chung Wong, Christian Sandor, Alvaro Cassinelli (CityU, HK)

Abstract – A method capable of turning textured surfaces into opportunistic input interfaces is demonstrated, thanks to a machine learning model pre-trained on acoustic signals generated by scratching different fabrics. It does not require intervention on the fabric. It is passive and works well using regular microphones. Preliminary results also show that the system recognizes the manipulation of Velcro straps, zippers, or the tapping or scratching of plastic cloth buttons over the air when the microphone is in personal space.
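In a much-reduced form, such a classifier can be sketched as nearest-signature matching on cheap acoustic features. This is a hypothetical illustration: the features, labels, and `classify` helper are invented, and the real system uses a pre-trained machine learning model rather than hand-picked features.

```python
# Hypothetical sketch (not the authors' model): match a scratch sound against
# per-fabric reference signatures using two cheap features, mean energy and
# zero-crossing rate.

def features(samples):
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return energy, zcr

def classify(samples, signatures):
    """signatures: {fabric_label: (energy, zcr)} reference points."""
    e, z = features(samples)
    return min(signatures, key=lambda k: (signatures[k][0] - e) ** 2
                                         + (signatures[k][1] - z) ** 2)
```

The idea carries over to the paper's setting: rough fabrics produce noisier, higher-frequency scratch sounds than smooth ones, so even coarse spectral statistics separate some materials before any deep model is involved.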


2022.03.01 Report by 林巖: INFINITELY YOURS

INFINITELY YOURS:  ARS ELECTRONICA,

The 2020 GOLDEN NICA

Category: Computer Animation (CA)

Author – Miwa Matreyek

Abstract: “INFINITELY YOURS” (2020) combines video and audio animation. The artist performs with her own shadow in the film to express feelings about climate issues and to reflect on human activity and the environment.

The film’s imagery builds a picture of nature and humanity: our activities and the things we consume affect the climate, and the use of water, forests, oil, and plastic each leaves its own mark on the earth.

The shadow plays a role that evokes daily life, in which the modern city has already had a heavy impact; through this role, the film carries its feelings and ideas to the audience as a shared reminder.

The film combines many basic elements and weaves the whole story into a single continuous scene, using shadow as a concise visual language, so that its message transfers directly to the people standing in front of it.

 

https://archive.aec.at/prix/showmode/63129/

→ Class Presentation

→ Artwork Website

 


2022.03.01 Report by 陳昭潔 – Algorithmic Perfumery

Algorithmic Perfumery – ARS Electronica 2019, Interactive Art+

→ Prix Archive Page

→ PPT

Authors – Frederik Duerinck

Abstract – In Algorithmic Perfumery, the world of scent is explored by using the visitor’s input to train the creative capabilities of an automated system. Custom scents are created by a machine learning algorithm based on the unique data we feed it. The outcome is a unique scent generated and compounded on-site. By participating in the experience, visitors contribute to the ongoing research to improve the system and reinvent the future of perfumery. Generative perfume design is an emerging reality of the not-too-distant future. Algorithmic Perfumery not only ignites the senses, it also allows participants to walk away with a tangible and usable memory of the work. Individuals may complete a personality test lasting about 15 minutes, composed of standard questions and a few more focused on scent preference. After the participants’ answers are compiled, a code is generated. You proceed to a contraption lined with tubes of concentrates, type in your code, and the machine proceeds to mix the concentrates in amounts based on the data provided. And at the end of the assembly line, a small sample vial of your individually crafted scent awaits you. You may then review your feelings about the scent, and in this way the A.I. learns and refines its scent-crafting abilities. An inspiringly unique approach to a seldom-represented creative process, Algorithmic Perfumery is indicative of the cohesive future between human ability and technological potential.


2022.03.01 Report by 陳思豪: The Deep Listener

Topic: The Deep Listener, PRIX ARS ELECTRONICA, THE 2021 WINNERS

Category: Computer Animation (CA)

Authors: Jakob Kudsk Steensen (DK)

Abstract: The Deep Listener (2019) is an audio-visual ecological expedition through Kensington Gardens and Hyde Park, the area surrounding the Serpentine Galleries. Designed as an augmented reality and spatial audio work downloadable as an app for mobile devices, it is both a site-specific public artwork and a digital archive of species that live within the park. It pushes the utility of augmented reality and technological tools to transform our spatial understanding of the natural world. The commission expands upon Kudsk Steensen’s practice of merging the organic, ecological, and technological in the building of complex worlds in order to tell stories about our current environmental reality.

Keywords: Deep Listener, Serpentine Galleries, Animation, Interaction Device, Augmented Reality, Slow Media

→ Class Presentation

Reference


Course Introduction: Academic Year 110, Second Semester

科技藝術書報討論 (Seminar on Readings in Technology Art)

Instructor: Prof. 許素朱 (with other instructors)
<xn.techart@gmail.com>
Course website: http://www.fbilab.org/nthu/aet/seminar
Class time: graduate course, Tuesdays 18:30–20:30
Location: Main campus, Room 603, General Building II (next to the interdisciplinary master's program office)

Course Description (課程目標)
This course leads students to (1) follow the latest trends in international technology-art creation and research: students select and present papers and artworks drawn from the most important academic conferences, journals, and art exhibitions in the field. The instructors assign lists of papers or technology artworks for students to read, and through presentations and discussion students develop abilities in both artwork creation and technical R&D. (2) On selected dates, instructors from multiple disciplines are invited to join the course and give cross-disciplinary lectures or discussions. (3) The course also teaches the key skills of writing and submitting papers. (More …
