
Downloads for Max/MSP/Jitter including the MMJ Depot

CataRT - IMTR Real-Time Corpus-Based Concatenative Synthesis, by Diemo Schwarz, IMTR Team, Ircam--Centre Pompidou, and collaborators. The concatenative real-time sound synthesis system CataRT plays grains from a large corpus of segmented and descriptor-analysed sounds according to their proximity to a target position in the descriptor space. CataRT is implemented in Max/MSP and takes full advantage of the generalised data structures and arbitrary-rate sound processing facilities of the FTM and Gabor libraries. CataRT lets you explore the corpus interactively or via a target sequencer, resynthesise an audio file or live input with the source sounds, or experiment with expressive speech synthesis and gestural control. It is an interactive implementation of the concept of corpus-based concatenative synthesis, explained in more detail in the linked article. CataRT is offered as free and libre open source software in the spirit of the GNU GPL.
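The selection step described above can be sketched as a nearest-neighbour lookup in descriptor space. The corpus layout, field names, and descriptor choices below are illustrative only, not CataRT's actual FTM data structures:

```javascript
// Each unit is a segment of source audio with analysed descriptors,
// e.g. [pitch (MIDI), loudness (dB), spectral centroid (kHz)].
// These example values are invented for illustration.
const corpus = [
  { id: "unit-a", descriptors: [60, -12, 1.2] },
  { id: "unit-b", descriptors: [64, -6, 2.8] },
  { id: "unit-c", descriptors: [59, -20, 0.9] },
];

function squaredDistance(a, b) {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
}

// Pick the corpus unit whose descriptors lie closest to a
// target position in descriptor space.
function selectUnit(corpus, target) {
  let best = null;
  let bestDist = Infinity;
  for (const unit of corpus) {
    const d = squaredDistance(unit.descriptors, target);
    if (d < bestDist) { bestDist = d; best = unit; }
  }
  return best;
}

// A target near unit-a's descriptors selects unit-a:
selectUnit(corpus, [60.5, -11, 1.3]).id; // "unit-a"
```

In the real system this lookup runs at grain rate against thousands of analysed units, which is why efficient data structures matter; the brute-force scan here is only to show the principle.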

spatium · tools for sound spatialization :: home

Tutorial 25: Tracking the Position of a Color in a Movie. There are many ways to analyze the contents of a matrix. In this tutorial chapter we demonstrate one very simple way to look at the color content of an image. We'll consider the problem of how to find a particular color (or range of colors) in an image, and then how to track that color as its position changes from one video frame to the next. This is useful for obtaining information about the movement of a particular object in a video, or for tracking a physical gesture. In a more general sense, the technique is useful for finding the location of a particular numerical value (or range of values) in any matrix of data. The object we'll use to find a particular color in an image is called jit.findbounds: given minimum and maximum values specified for each of the four planes, it reports where matching cells lie. In the example, a jit.qt.movie object plays a movie (actually an animation) of a red ball moving around, and a metro drives the frame output.
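The idea behind this kind of bounds-finding can be sketched in plain code. The matrix here is a hand-rolled 2D array of [r, g, b] cells scaled 0..1, and the function is our own illustration, not Jitter's actual API:

```javascript
// Report the bounding box of all cells whose values fall between
// a per-plane minimum and maximum (the rough idea of jit.findbounds).
function findBounds(matrix, min, max) {
  let x0 = Infinity, y0 = Infinity, x1 = -1, y1 = -1;
  matrix.forEach((row, y) => {
    row.forEach((cell, x) => {
      const inside = cell.every((v, p) => v >= min[p] && v <= max[p]);
      if (inside) {
        x0 = Math.min(x0, x); y0 = Math.min(y0, y);
        x1 = Math.max(x1, x); y1 = Math.max(y1, y);
      }
    });
  });
  return x1 < 0 ? null : { min: [x0, y0], max: [x1, y1] };
}

// A 3x3 "image" with one red-ish cell at x=2, y=1:
const img = [
  [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
  [[0, 0, 0], [0, 0, 0], [1, 0.1, 0.1]],
  [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
];
findBounds(img, [0.8, 0, 0], [1, 0.3, 0.3]); // { min: [2, 1], max: [2, 1] }
```

Tracking then amounts to re-running this search every frame and taking the centre of the reported box as the object's position.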

Max Objects Database

Black Box Driven Development in JavaScript. Sooner or later every developer discovers the beauty of design patterns, and sooner or later discovers that most patterns are not applicable in their pure form. Very often we use variations, changing the well-known definitions to fit our use cases. What is a black box? Before getting to the principles of BBDD, let's see what is meant by a black box. In science and engineering, a black box is a device, system or object that can be viewed in terms of its input, output and transfer characteristics, without any knowledge of its internal workings. In programming, every piece of code that accepts input, performs actions and returns output can be considered a black box. This is the simplest version of a BBDD unit: we have an API containing all the public functions of the box. Now that we know what a black box is, let's look at the three principles of BBDD. Principle 1: modularize everything. Every piece of logic should exist as an independent module.
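A minimal sketch of such a unit might look like this. The module and its API are invented here to illustrate the principle, not taken from the article:

```javascript
// A black-box unit: all state lives inside the closure, and callers
// interact only through the public API (input in, output out).
function createCounter() {
  let count = 0; // internal state, invisible from outside the box
  return {
    increment(by = 1) { count += by; return count; }, // input -> output
    value() { return count; },
  };
}

const counter = createCounter();
counter.increment();   // 1
counter.increment(4);  // 5
counter.value();       // 5 -- callers never touch `count` directly
```

Because nothing outside the box can reach `count`, the internals can be rewritten freely as long as the API's input/output behaviour is preserved, which is exactly the black-box property described above.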

FTM&Co - ftm

OSC for Arduino and Embedded Processors. This project provides an OSC library for Arduino, Teensy and related embedded processor platforms. It is the most feature-rich implementation of the OSC encoding for these platforms. Features:

- Supports the four basic OSC data types (integers, floats, strings, and blobs) and some common type extensions
- Sends and receives messages over any transport layer that implements Arduino's Stream class, such as Serial, EthernetUdp, and more
- Address pattern matching
- Dynamic memory consumption
- Compatible with the Arduino 1.0 API and coding style
- Includes many examples for various host applications, including Max/MSP and Pd

Start by downloading the OSC for Arduino library from GitHub. OSC for Arduino provides two classes: OSCMessage and OSCBundle.
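For a sense of what the library puts on the wire, here is a rough sketch of the OSC 1.0 message format in plain Node.js rather than the Arduino API: an address pattern, a type-tag string, then big-endian arguments, each section NUL-terminated and padded to a multiple of 4 bytes. The helper names are ours; the real library's OSCMessage class handles all of this for you:

```javascript
// Pad a buffer with NULs to a multiple of 4 bytes (at least one NUL).
function pad4(buf) {
  const padded = Buffer.alloc(Math.ceil((buf.length + 1) / 4) * 4);
  buf.copy(padded);
  return padded;
}

// Encode an OSC message carrying a single int32 argument.
function oscMessageInt(address, value) {
  const arg = Buffer.alloc(4);
  arg.writeInt32BE(value);
  return Buffer.concat([
    pad4(Buffer.from(address)), // address pattern, e.g. "/led"
    pad4(Buffer.from(",i")),    // type-tag string: one int32
    arg,                        // big-endian argument
  ]);
}

// "/led" -> 5 bytes with NUL -> padded to 8; ",i" -> 4; int32 -> 4.
const msg = oscMessageInt("/led", 1);
msg.length; // 16
```

The same 16 bytes are what an Arduino sketch would produce with the library's OSCMessage before handing them to a Stream transport.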

Making an infra-red triggered sampler, using the Johnny Chung Lee method! You definitely want to use cv.jit.sort to index your blobs, or design your own sorting method. Sort seemed to work well for me, although I never developed a fully functional system. You may run into problems if the movement is too extreme between frames, but for slower gestures it should work well. Another approach (if using normal video rather than IR) might be to colour your blobs in such a way that they can be identified by some Jitter-native calculation of average colour. I remember that my big problem was with blobs merging (at which point sort will flake out), so that was more of a concern for me than reliable sorting directly. I was thinking a lot about a decent filter to separate blobs, since with LEDs the glare on the camera can create haloing around each bulb. Not sure if that's of any use, but those were my thoughts at the time. Alex
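The index-stability problem Alex describes can be sketched as nearest-centroid matching between frames. This greedy toy version is only an illustration of the idea, not cv.jit.sort's actual algorithm:

```javascript
// Keep blob indices stable across frames: match each blob from the
// previous frame to the nearest unclaimed centroid in the current frame.
function matchBlobs(previous, current) {
  const taken = new Set();
  return previous.map((p) => {
    let best = -1, bestDist = Infinity;
    current.forEach((c, i) => {
      if (taken.has(i)) return;
      const d = (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2;
      if (d < bestDist) { bestDist = d; best = i; }
    });
    taken.add(best);
    return current[best]; // current blob re-indexed to previous order
  });
}

// Two blobs swap their raw detection order between frames,
// but matching restores a stable ordering:
matchBlobs([[10, 10], [100, 100]], [[102, 98], [11, 12]]);
// -> [[11, 12], [102, 98]]
```

Note that this matching fails in exactly the cases mentioned in the post: large inter-frame movement, and blobs merging so that two previous centroids compete for one current one.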

Research. The following is a comprehensive bibliography of CNMAT publications. For many of our publications, full-text and PDF formats are provided, as well as links to automatically find the publication in your local library.

MacCallum, John; Freed, Adrian; Wessel, David. "New Tools for Aspect-Oriented Programming in Music and Media Programming Environments," International Computer Music Conference, Athens, Greece, 14/09/2014. Abstract | PDF

Freed, Adrian; DeFilippo, David; Gottfried, Rama; MacCallum, John; Lubow, Jeff; Razo, Derek; Rostovtsev, Ilya; Wessel, David. "o.io: a Unified Communications Framework for Music, Intermedia and Cloud Interaction," ICMC, Athens, Greece, 2014. Abstract

Lalor, E.; Mesgarani, N.; Rajaram, S.; O'Donovan, A.; Wright, J.; Choi, I.; Brumberg, J.; Ding, N.; Lee, K. "Decoding Auditory Attention (in Real Time) with EEG."

Peters, Nils; Schmeder, Andrew W. "Synthesizing classic recording microphones characteristics using a spherical microphone array (A)," J.
