Problem Description
Expressing ourselves as humans has its roots in, and inevitably requires interaction with, the material world, a realm governed by the laws of physics. Although computers can aid us in our artistic endeavors, our intervention in their mechanical processes is what molds digital ephemera into the tangible media we consume. The guitar player has a high level of intuitive, physical control over the timbre of their instrument. Even a child may pick up a ukulele and intuitively grasp the concept of vibrato within minutes. However, programming a synthesizer to sound natural, even human, can be a difficult concept to wrap one's mind around, let alone execute. Creating an ever-evolving soundscape while maintaining physical control over a synthesized instrument is the problem we wish to address in this project. We want to introduce a visual, physics-based control method for synthesized sound. To achieve this we will simulate the diffusion of liquids in real time to create a dynamic mixture of fluids. The homogeneity, position, color, velocity, and other properties of the mixture will be sampled by taking cross-sections, and used to generate synthesized waveforms and to add effects to them. For example, by pouring a denser liquid into another and keeping a horizontal sampling plane fixed in the middle of the container, the generated waveform will evolve over time as the concentration of the dense liquid in the sampling plane waxes and wanes on the liquid's way to the bottom of the container. In doing so, we hope to create an instrument equally expressive and automated, whose playing experience evokes discovery and human-computer collaboration as much as a sense of natural pause and palpable interaction.
Where the problem grows challenging is in its integration of the technical and the humanistic: creating something deeply connective is often simple--something so organic as a doodle, a conversation, or local foliage can create a sense of grounding like nothing else can--but can grow complex when it becomes a digital experience reliant on novel audiovisual sensations. In designing a specific model for our world and our instrument, we’ve developed a system reliant on difficult forms of signal processing and physical modeling: diffusion simulation, spectral analysis, and possibly a simplified alternative to ray-tracing. We further acknowledge that the interplay between our system and any external tools we use may be complex: we intend to use MaxMSP, and are still determining how to route control data to it; and to render the experience in real time, we may use rendering software built in class, Blender, or some other system, whose data structures we will need to learn to interface with. Keeping this experience interactive presents further challenges: the performance of the software or the quality of the rendered materials may suffer as a result.
Nonetheless, these challenges are also what bring to life the project’s value: to create an immersive, expressive experience, a testament to the growing accessibility and endless possibility of computer-generated art. In expanding the range of how we experience and create music, we hope to inspire further creativity, and create an interactive world that stands on its own as a realm for creative exploration and delight.
Goals and Deliverables
We’ve separated our system into a series of milestones, each of which increases the interactivity and musicality of the system (and, for the many stretch goals among them, its departure from realism) as we continue to build it:
- Liquid diffusion: Build a physics simulation of the diffusion of multiple liquids.
- Properties of the liquid:
- Mass
- Color
- Opacity
- Viscosity
- Surface Tension
- Stretch: Emitted light
- Basic rendering: Display diffusing liquids in a bounded volume using an appropriate renderer (to be researched). Include basic lighting to highlight the patterns developing within the liquid.
- Scene model: Design a model for the scene with a glass container and a dark surrounding environment.
- Liquid interaction: Enable users to customize liquids (and save preset liquids), and pour these liquids into the container to watch them diffuse.
- Musical personality: Build an events and analysis system which extracts desired features from the physics simulation as a series of musical parameters. Integrate these parameters with MaxMSP to produce a synthesized sound dependent on the fluid’s activity.
- Ideal rendering: Improve rendering to refract light through the glass container, reflect light against the surrounding walls in the environment, and allow liquids to act as colored light sources.
- Additional interactivity: A series of stretch goals that would increase the musicality, interactivity, and possibly the visual intrigue of the environment.
- Add buoyant spheres to the body of liquid, each of which acts as either a resonant filter or some other effect.
- Add a sweeping, glowing plane to the scene, which sweeps over the liquid and its container at regular intervals (or may be manually controlled by the user) to trigger the spheres’ actions (and possibly other time- or position-dependent musical changes); acts as a step sequencer for the spheres.
- Enable elastic mesh deformation to excite guitar plucks using the Karplus-Strong algorithm (extremely easy to implement in MaxMSP).
- Allow fluids to spill over the edges of their container (if not already implemented), possibly with musical effects fired on contact between fluid particles and the environment’s floor.
- Allow interactions with the container, such as shaking.
- Enable the user to fly around the container (change viewing angle and possibly distance), changing filtration, amplitude, and stereo panning of the sound to match position (e.g. if you’re farther away from one section of the fluid, its sound will be quieter and have some of its high end cut off; if it’s off to the right of the scene relative to the camera position, the sound will be panned closer to the right speaker/headphone/audio channel).
Realistically, we plan to deliver liquid diffusion, basic rendering, the scene model, liquid interaction, and musical personality in a real-time demo, with some of these possibly simplified to fit time constraints (musical personality, for example, may be less built out or absent, and the final product may be merely rendered liquid diffusion; we hope to push beyond this, but acknowledge the possibility). We hope to deliver all of the milestones listed above, with some points of additional interactivity possibly excluded. We’re very excited about the direction of this project, and are eager to explore as much of its interdisciplinary and interactive potential as we can!
Our demo will feature a GUI that people can interact with in real-time, and a pre-rendered video demonstrating the audio-visual outcome of our CG instrument. If our demo fails to work in real-time, we plan to simulate a series of timed human inputs, and render these as a video of what we would have been able to do in real-time otherwise.
We plan to answer with our analysis how we might expressively model physical or visual inputs as musical signals, and how reliant these musical signals should be on the intrinsic physical properties of the inputs to be sufficiently expressive, intuitive, interactive, and engaging. In particular, we'll explore this with diffusing liquids and any interactions between them and the user which we may create, as explained above.
Though the performance of the system doesn't have terribly specific benchmarks, we'll test the early stages of our project against images of diffusion (or our own experiments at home, whether pouring creamer into coffee or dropping dye into water) for realism, with minimal blurring required. Past this point, we'll test our musical engine with hand-controlled parameters at first (sliders, numerical inputs, etc.) to examine the sound's thematic connection to our physical model and its personality. Beyond this point, benchmarks will likely involve the intensity of any given parameter--whether an extracted musical parameter has a sufficient effect on the system, and if not, whether we'd like to tune it or scrap it. The system will require a good bit of play and experimentation to test.
Inspiration and Desired Aesthetic
Fluid and sphere sequencer concept
Colored ink diffusion in a beaker source
Glass cubes refracting rainbows, hopefully like our glass container source
Schedule
Week 1 (4/12 - 4/18)
- Simulate liquid diffusion
- Basic rendering
- Create scene
Week 2 (4/19 - 4/25)
- Mixture analysis and event system
- Sampling mixture
- Sound generation
- MaxMSP integration
Week 3 (4/26 - 5/2)
- Stretch Goals (in order of importance)
- Buoyant, musical spheres
- Sweeping plane
- Spills
- Camera perspective effects
Week 4 (5/3 - 5/9)
- Integrate with Blender for a final render
Initial Resources
Tools
We'll be using Cycling74's MaxMSP to create our musical engine, which we'll compile as a standalone application linked to the visual engine we write. We'll likely use a rendering engine for both real-time renders and clean, possibly ray-traced stills--we plan to use Blender for the latter, but are as yet unsure what to use for the former (Blender, Unity, WebGL (possibly via three.js), and the renderer(s) we've built in class all seem like good options, though we'd love input on this).
Articles
The following articles constitute our first round of secondary research, with likely more to come.
- Interactive Visual Simulation of Dynamic Ink Diffusion Effects link
- Fluid Simulation For Computer Graphics: A Tutorial in Grid Based and Particle Based Methods link
- Example-Guided Physically Based Modal Sound Synthesis link
- Particle-Based Fluid-Fluid Interaction link
- Sound Rendering for Physically Based Simulation link
- Toward Animating Water with Complex Acoustic Bubbles link
Milestones
- Implementation Walkthrough: A conceptual explanation of the methods of implementation. Main info is in the first minute or so.
- Created 3 new classes: Fluid, FluidParameters (fluid.h and fluid.cpp), and FluidSimulator (fluidSimulator.h and fluidSimulator.cpp)
- Copied the cloth.h and cloth.cpp files and adapted the code to create fluid.h and fluid.cpp. These files were added to ~/src.
- FluidParameters class
- bool enable_physics (placeholder until we implement smoothed-particle hydrodynamics https://en.wikipedia.org/wiki/Smoothed-particle_hydrodynamics. See simulate method in Fluid class.)
- double density
- double viscosity
- double surface_tension
- Fluid class
- Methods
- Constructor
- double width (x-axis)
- double height (y-axis)
- double depth (z-axis)
- int num_width_points
- int num_height_points
- int num_depth_points
- buildGrid()
- Adds num_width_points * num_height_points * num_depth_points point masses to point_masses at locations spanning width, height, and depth (a minimal sketch of this appears at the end of this section).
- simulate()
- Compute total external force acting on each point mass.
- TODO: Compute interactions between fluid particles (Smoothed-particle Hydrodynamics).
- Use Verlet integration to compute new point mass positions.
- Handle collisions with other primitives
- reset()
- build_spatial_map()
- TODO
- hash_position()
- TODO
- Changes made to ClothSimulator files to make FluidSimulator class
- clothSimulator.h and fluidSimulator.h
- Almost identical. All Cloth/ClothParameters classes and object names changed to Fluid/FluidParameters classes and object names.
- clothSimulator.cpp and fluidSimulator.cpp
- Changed all references to Cloth/ClothParameters objects into references to Fluid/FluidParameters objects.
- drawContents()
- Removed switch statement for drawing the cloth.
- Added loop over all point_masses in fluid.
- Create a sphere at pm->position
- Render sphere.
- Removed drawWireframe(), drawNormals(), and drawPhong().
- Camera calculation methods unchanged
- Event handling methods unchanged.
- initGUI()
- Changed buttons, float boxes, and sliders to correspond to the fluid parameters.
- Changes made to main.cpp
- All Cloth/ClothParameters classes and object names changed to Fluid/FluidParameters classes and object names.
- loadObjectsFromFile()
- Parse input for fluid properties and parameters instead of cloth.
- main()
- Something we need to fix:
- Changes made to CMakeLists.txt (in src)
- Added fluid.cpp and fluidSimulator.cpp so the Fluid and FluidSimulator objects compile.
- Line 432: We have been unable to edit debug and launch settings, so filename should be inserted here.
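For concreteness, here is a minimal sketch of the grid-building step described above. It follows ClothSim's PointMass/Vector3D conventions, but the struct, names, and even spacing (it assumes at least two points per axis) are illustrative rather than our exact implementation:

```cpp
#include <vector>
#include "CGL/vector3D.h"

using CGL::Vector3D;

// Minimal stand-in for ClothSim's PointMass, kept only to make the sketch self-contained.
struct PointMass {
  Vector3D position;
  explicit PointMass(const Vector3D &p) : position(p) {}
};

// Evenly space num_width_points x num_height_points x num_depth_points point masses
// across the volume (assumes at least two points along each axis).
std::vector<PointMass> buildGrid(double width, double height, double depth,
                                 int num_width_points, int num_height_points,
                                 int num_depth_points) {
  std::vector<PointMass> point_masses;
  point_masses.reserve((size_t)num_width_points * num_height_points * num_depth_points);
  for (int i = 0; i < num_width_points; ++i) {
    for (int j = 0; j < num_height_points; ++j) {
      for (int k = 0; k < num_depth_points; ++k) {
        double x = width  * i / (double)(num_width_points  - 1);
        double y = height * j / (double)(num_height_points - 1);
        double z = depth  * k / (double)(num_depth_points  - 1);
        point_masses.emplace_back(Vector3D(x, y, z));
      }
    }
  }
  return point_masses;
}
```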
Final Project Conclusions/Deliverable
Abstract
In an exploration of graphical surfaces as instruments, our group set out to build a fluid simulator that synthesizes sound with an emphasis on expressivity, intuition, and immersion. Our fluid simulator is a particle-based Smoothed-Particle Hydrodynamics (SPH) system built atop our ClothSim project. As the simulation runs, an event handler in the engine communicates with MaxMSP using Open Sound Control (OSC) to modulate a series of synthesis modules, yielding an ever-evolving, generative landscape suggestive of fluid in its timbre.
Video
Fluid Simulation
The most technically demanding aspect of this project is our fluid simulation. It builds on our ClothSim particle simulation project: we have fluid particles, each with specific properties, which we run through a main simulator that applies the physics to each object. We simulate the forces on the liquid using discretized updates for surface tension, drag, viscosity, and a Coulomb force.
We implemented drag the way it usually appears in physics: as an opposing acceleration dependent on the velocity squared, the density, the cross-sectional area, and a drag coefficient, where the velocity is taken relative to the surrounding fluid. Density is a parameter of each particle, the cross-sectional area is based on the particle's radius, the drag coefficient is set to 1, and the relative velocity is defined as the current particle's velocity minus the average velocity of its neighboring particles. Both the surface tension and viscosity forces are implemented through a Gaussian kernel, following several tutorials and research papers on how to implement them properly. We used a Coulomb force as a repelling force between particles, first checking for a collision between two particles and then applying the relevant force; no further correction was performed to separate particles upon collision.
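As a rough sketch of that drag update, assuming CGL's Vector3D from ClothSim (the variable names are illustrative, and the drag coefficient defaults to 1 as described):

```cpp
#include <cmath>
#include "CGL/vector3D.h"

using CGL::Vector3D;

// Quadratic drag on one particle, relative to the average velocity of its neighbors.
Vector3D dragForce(const Vector3D &velocity,
                   const Vector3D &avg_neighbor_velocity,
                   double density, double radius,
                   double drag_coefficient = 1.0) {
  Vector3D v_rel = velocity - avg_neighbor_velocity;  // velocity relative to the surrounding fluid
  double area = M_PI * radius * radius;               // cross-sectional area from the particle radius
  // F = -1/2 * rho * C_d * A * |v_rel| * v_rel  (opposes relative motion, magnitude ~ |v_rel|^2)
  return -0.5 * density * drag_coefficient * area * v_rel.norm() * v_rel;
}
```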
Underneath it all we carry over a lot of useful ideas from ClothSim: Verlet integration, spatial hashing for calculations between neighbors, and the looping format for simulate. There is a lot of useful groundwork given by our ClothSim project, as well as the nanogui configuration, that we have modified to our liking.
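For reference, the Verlet position update carried over from ClothSim looks roughly like this; the damping term and parameter names are assumptions based on that project rather than an exact copy of our code:

```cpp
#include "CGL/vector3D.h"

using CGL::Vector3D;

// One Verlet step for a single particle, in the style of ClothSim.
// `position` and `last_position` are updated in place; `damping` lies in [0, 1).
void verletStep(Vector3D &position, Vector3D &last_position,
                const Vector3D &total_force, double mass,
                double dt, double damping) {
  Vector3D acceleration = total_force / mass;
  Vector3D current = position;
  position = current
           + (1.0 - damping) * (current - last_position)  // inertia, lightly damped
           + acceleration * dt * dt;                       // external + SPH forces
  last_position = current;
}
```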
Our first step was to implement four new classes: FluidParameters, FluidParticle, Fluid, and FluidSimulator. FluidSimulator is based on ClothSimulator from Project 4, and contains most of the GUI. The class also handles events, loads and binds shaders, and loads then renders all objects in the scene. Fluid was originally based on Cloth, but instead of a vector of PointMasses that are connected by springs, Fluid contains a vector of FluidParticles and a vector of FluidParameters. FluidParticles are identical to PointMasses, but contain a reference to an element of the FluidParameters vector in the Fluid class. This allows particles to share the same sets of properties while allowing for multiple kinds of fluids in the simulation. When Fluid is initialized, a volume is populated by evenly spaced (and then slightly randomly displaced) FluidParticles with references to the “BASE” FluidParameters.
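The data layout described above looks roughly like the following sketch; the real classes live in fluid.h, and the field names (including the mass field reused in the sketches below) are illustrative:

```cpp
#include <vector>
#include "CGL/vector3D.h"

using CGL::Vector3D;

// Illustrative data layout only; the real classes live in fluid.h / fluid.cpp.
struct FluidParameters {
  double density;
  double viscosity;
  double surface_tension;
};

struct FluidParticle {
  Vector3D position;
  Vector3D last_position;    // previous position, needed for Verlet integration
  Vector3D forces;           // accumulated forces for the current timestep
  double mass;
  FluidParameters *params;   // shared: many particles point at the same preset
};

struct Fluid {
  // Presets are created up front (e.g. parameters[0] is the "BASE" preset) and the
  // vector is not resized afterwards, so the pointers in FluidParticle stay valid.
  std::vector<FluidParameters> parameters;
  std::vector<FluidParticle> particles;
};
```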
As FluidSimulator steps through time, the simulate method is called on the Fluid. In simulate, we start by applying all external forces to each FluidParticle. We then build a spatial map of the particles by hashing each particle's position, just like in Project 4. Next, we loop through each particle, calculating and summing the SPH forces between particles. Finally, we perform Verlet integration, and handle collisions with objects and with the container.
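In code, one step of simulate has roughly this shape; buildSpatialMap, neighborsOf, sphForces, and resolveCollisions are placeholders standing in for the corresponding routines in our code, and verletStep is the sketch shown earlier:

```cpp
// Rough shape of one simulation step (placeholder helper names, not our literal code).
void simulateStep(Fluid &fluid, double dt, double damping,
                  const std::vector<Vector3D> &external_accelerations) {
  for (FluidParticle &p : fluid.particles) {
    p.forces = Vector3D(0, 0, 0);
    for (const Vector3D &a : external_accelerations)
      p.forces += p.mass * a;                   // external forces, e.g. gravity
  }

  buildSpatialMap(fluid);                       // hash each particle's position (Project 4 style)

  for (FluidParticle &p : fluid.particles)
    for (FluidParticle *n : neighborsOf(fluid, p))
      p.forces += sphForces(p, *n);             // surface tension, viscosity, drag, Coulomb

  for (FluidParticle &p : fluid.particles) {
    verletStep(p.position, p.last_position, p.forces, p.mass, dt, damping);
    resolveCollisions(fluid, p);                // other primitives and the container walls
  }
}
```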
The next challenge was to be able to add new fluid particles to the simulation, and change fluid parameters on the fly. We encountered a number of problems while implementing this, and had to change our approach multiple times. At first we used a button with a callback event that would create a new instance of a FluidParameters object based on a number of float boxes in the GUI. A reference to this object was passed into a new FluidParticle that was added to the vector in Fluid. However, this prevented us from changing the properties of many particles at once because each newly added particle had a unique FluidParameters object associated with it. In the end we added 2 preset FluidParameters to the FluidParameters vector in the Fluid class. The values of these parameters can be changed in the GUI in order to change all particles with a reference to that FluidParameters object, and an “Add particle” button will add a particle with a reference to the currently selected preset.
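Continuing the sketch above, the preset mechanism amounts to sharing a pointer rather than copying the parameters; the callback names and the default mass here are hypothetical:

```cpp
// Editing a preset updates every particle that points at it.
void onViscosityBoxChanged(Fluid &fluid, size_t selected_preset, double value) {
  fluid.parameters[selected_preset].viscosity = value;  // all referencing particles see this
}

// "Add particle" reuses the currently selected preset rather than allocating a new FluidParameters.
void onAddParticleClicked(Fluid &fluid, size_t selected_preset, const Vector3D &where) {
  FluidParticle p;
  p.position = p.last_position = where;
  p.forces = Vector3D(0, 0, 0);
  p.mass = 1.0;                                          // placeholder mass
  p.params = &fluid.parameters[selected_preset];         // share the preset, don't copy it
  fluid.particles.push_back(p);
}
```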
Audio Engine
We designed our audio engine with more of a focus on graphics as a form of input, control, or expression than on a mathematical model of a sonic environment. As a result, rather than using the environment itself to synthesize sound (as might be done using real-time audio ray-tracing for sound waves), we extracted data and tracked events to modulate musical parameters with a large impact on the overall timbre of our engine. This data is bundled and sent using the library oscpack, which uses UDP sockets hosted locally to send packets to MaxMSP.
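A minimal oscpack send, adapted from the library's SimpleSend example, looks like the following; the OSC address and port are placeholders for whatever the MaxMSP patch's udpreceive object actually expects:

```cpp
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

// Send one float to MaxMSP over OSC. Address pattern and port are illustrative.
void sendAverageVelocity(float avg_velocity_magnitude) {
  static const char *kHost = "127.0.0.1";
  static const int kPort = 7400;                    // must match the patch's [udpreceive]
  char buffer[1024];

  UdpTransmitSocket socket(IpEndpointName(kHost, kPort));
  osc::OutboundPacketStream packet(buffer, sizeof(buffer));
  packet << osc::BeginBundleImmediate
         << osc::BeginMessage("/fluid/avg_velocity")
         << avg_velocity_magnitude
         << osc::EndMessage
         << osc::EndBundle;
  socket.Send(packet.Data(), packet.Size());
}
```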
The engine consists of six sonic modules, all of which rely on synthesized sound rather than samples, with the general goal of creating a soundscape reminiscent of the sound of liquid in motion, collision, and wind. The first of these modules, and the heart of the patch, is a collection of 10 physical modeling synthesizers, each of which uses a sample (the only use of samples in the project) to excite a hammer modeled by the Karplus-Strong algorithm (commonly used for modeling realistic guitar strings; a minimal sketch of the algorithm's core appears after the list below), which in turn excites a resonant filter tuned with certain parameters. The model dictating the timbre is the same for all instances of the synth, save for the f0 value, roughly equivalent to pitch; this value is randomized before being fed into each synth to determine one pitch per synth; these pitches can be completely randomized within a given range or aligned with a C minor scale. Each synthesizer also randomly modulates itself: the length of each short burst of sound is determined by a self-randomizing metronome. Within this module, the following parameters are modulated by the simulation:
- Hammer mass and hammer stiffness linearly increase with the average particle volume, making the striking sound harsher.
- tau0 (suggestive of material, e.g. a metal plate vs. a glass chime vs. a resonant tube) and harmonicity increase linearly with the number of collisions in a given timestep, yielding an increase in perceived brightness.
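Our synthesizers live in MaxMSP rather than in our C++ code, but the Karplus-Strong core referenced above is simple enough to sketch; the buffer handling and decay constant below are illustrative only:

```cpp
#include <cstdlib>
#include <vector>

// Minimal Karplus-Strong pluck: a delay line of noise, repeatedly averaged with its
// neighbor and fed back, decays into a pitched, string-like tone. The pitch f0 sets
// the delay-line length (assumes f0 < sample_rate); `decay` < 1 controls damping.
std::vector<float> karplusStrong(float f0, float sample_rate,
                                 int num_samples, float decay = 0.996f) {
  int period = static_cast<int>(sample_rate / f0);           // delay-line length -> pitch
  std::vector<float> delay(period);
  for (float &s : delay)
    s = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;       // noise burst = the "pluck"

  std::vector<float> out(num_samples);
  for (int n = 0; n < num_samples; ++n) {
    int i = n % period;
    int next = (i + 1) % period;
    float sample = decay * 0.5f * (delay[i] + delay[next]);  // averaging low-pass + feedback
    out[n] = sample;
    delay[i] = sample;
  }
  return out;
}
```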
The physical modeling synthesizers all feed into a series of delays feeding into each other, yielding a wet, echoing sound that's often darker or lower frequency than the original sound. In some cases, this can yield an almost rain-like effect. The amount of delay linearly increases with the number of particles.
The delayed signal (plus some quantity of the original) is then piped into a granular synthesizer to create a granular freeze: a large number of fairly short grains of audio are requested in extremely rapid succession, with the start position gently modulated around a fixed point in the audio so that similar grains play back, freezing a moment of sound. This yields a sometimes scratchy, sometimes icy sound that often resembles a faucet running. A few parameters are modulated by the simulation:
- The average y (vertical) value of the particles yields a linear increase in the grain start position, yielding an upward motion as particles splash around.
- The average velocity magnitude of the particles linearly increases the amplitude of the granular synthesizer's output signal; the chaos of fast-moving particles allows more of what can at times be a chaotic signal (when it sounds like scraping) through.
Consistent throughout the simulation is a wind engine, which simulates wind sounds by filtering noise through a resonant filter whose Q and center frequency constantly interpolate between two randomly selected values every 150 ms. No modulation is performed by the simulator on this portion of the patch.
"Noisy Diracs" is the fifth module: a series of resonant filters filtering noise, with center frequencies forming a chord (C min 7 add 9). These create a harmonious, bright wind. The amplitude of this signal increases linearly with the luma value (brightness) of the simulation's background color, controllable via a color wheel in the GUI.
A simple triangle wave with a short attack and slightly longer release acts as an indicator of having added a particle. The signal is sent through a short delay that resembles a reverb, and iterates over the notes in a given chord (again C min 7 add 9) each time the event is received.
Finally, the system's master output fades in or out with an exponential easing upon receiving a play or pause message.
Results and Exploration
We originally set out to create a fluid diffusion simulator and synthesizer. Although we did not model fluid diffusion, our model yielded some interesting behavior: with the forces given, rather than behaving like a liquid, it tends either to act like a gas or to rapidly congeal. This behavior depends on the smoothing parameter of our Gaussian kernel, h: for lower values, the particles are more likely to repel each other and swirl around the container's perimeter at a high y value; for higher values, the particles quickly glom together and move minimally. This yielded some markedly distinct timbres given the expressive parameters made controllable: less active particles trended towards a darker, less active sound, while gas-like particles trended towards a chaotic sound with a brighter quality.
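For context, a common form of the Gaussian SPH smoothing kernel and its parameter h is shown below; the exact normalization and units we used may differ, so treat this as a reference shape rather than our literal implementation:

```cpp
#include <cmath>

// Gaussian smoothing kernel, W(r, h) = exp(-r^2 / h^2) / (pi^(3/2) * h^3).
// r is the distance between two particles; h is the smoothing length discussed above.
// Kernels of this shape weight surface tension and viscosity contributions by distance.
double gaussianKernel(double r, double h) {
  double sigma = 1.0 / (std::pow(M_PI, 1.5) * h * h * h);  // 3D normalization factor
  return sigma * std::exp(-(r * r) / (h * h));
}
```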
The simulation runs fairly slowly on ClothSim's architecture, but quickly enough to feel like a sufficiently interactive (and possibly immersive) audiovisual landscape. Going forward, building atop a more widely used, better optimized engine might be helpful; we briefly considered using WebGL via three.js, but had already implemented the majority of our deliverable.
We also found that some forces were much more computationally complex to calculate--pressure and vorticity in particular presented a challenge worth exploring in future projects, or worth building on with the help of an engine like SPlisHSPlasH.
In future endeavors of a similar variety, it may be interesting to blend physically modeled environments (as discussed at the start of the section on our audio engine) with synthesizers to build a more realistic but equally imaginative and creatively controllable/explorable audiovisual environment. Compiling traced audio rays into filters for a given physical model, using color wavelength distributions to inform the chosen frequencies or pitches, and plucking elastic meshes to create resonant cavities of different shapes are all directions we have considered.
Conclusion
Though we ultimately pivoted from our original plan to create a synthesizer based on fluid (particularly ink-like) diffusion modeling, we successfully modeled fluid-like behavior using a particle-based simulation of SPH forces, with our audio engine's high-level parameters mapped to expressive data about the fluid particles. It's in fact often pivots that lead to unexpected and exciting discoveries: our particles effectively created a sonic world whose timbre we did not anticipate but lent itself well to the simulation, and whose evolution was sufficiently continual and responsive to communicate a link between the graphics and the audio.
Over the past few weeks, from a technical standpoint, we've learned a lot about the pros and cons of particle- vs. grid-based simulation, how to model discrete time updates for various forces, and what drives the interaction between fluid particles. We also learned about how others have approached this problem: whether by choosing a simpler or more flexible model for each fluid's parameters (the same viscosity and surface tension with different colors to show the gradient of motion), setting other constraints on the simulation (e.g. some extremely effective and visually appealing 2D models), or going all-in with multiple modular forces from various papers, either reducing or embracing (and carefully organizing) complexity was key. From a less technical standpoint, this project was a reminder of very real limitations: whether it be a global pandemic, an ambitious scope, or technical setbacks, keeping the project achievable was important throughout the process. Nonetheless, we had the opportunity to explore--whether playing with our simulation, working with different implementations of forces, reading about and ogling beautiful simulations similar to our goal, or tuning our audio engine's parameters--and each step deepened our learning and our overall sense of connection and integration between the many elements that make a successful physics simulation or tell a compelling audiovisual narrative. We also leveraged the diversity of our backgrounds in computer science, mechanical engineering, and electronic music; this was useful in our work, but also created a rich forum for academic conversation as we explored project pathways.
References
- Interactive visual simulation of dynamic ink diffusion effects link
- SPlisHSPlasH link
- Fluid Simulation For Computer Graphics link
- List of fluid dynamic experiments link
- 3D grid based fluid doc link
- WebGL fluid simulator link
- Physically Based Sound Synthesis link
Contributions
- Colin Acton:
- Handled the base of our code (adapting ClothSim) for this project.
- Coded a majority of the back end implementation of the project, such as the physics/simulation aspect and the GUI.
- Contributed to presentation slides and final deliverable.
- Jessie Mindel:
- Developed sound engine in MaxMSP and implemented sound integration with the simulator.
- Helped with data structures and some forces in the physics simulation (hats off to Colin!).
- Wrote HTML/CSS for the webpage.
- Contributed to presentation slides and final deliverable.
- Wendy Vincent:
- Contributed to presentation slides and final deliverable.
- Researched force update methods and alternative approaches.
- Worked with group members on debugging and implementation.