
AF Project


Akeru beta 3.2. Capture Video from File or Camera. Capture Video From File You can download this OpenCV Visual C++ project from here. (The downloaded file is a compressed .rar folder, so you have to extract it using WinRAR or other suitable software.) This is how I play a video using my OpenCV program. New OpenCV functions which were not covered earlier are explained here. VideoCapture::VideoCapture(const string& filename) This is one of the few constructors available in the VideoCapture class. The destructor of this class will deallocate any memory associated with this object. bool VideoCapture::isOpened() If the previous call to the VideoCapture constructor succeeded, this method returns true. It is essential to check whether the VideoCapture initialized successfully. bool VideoCapture::set(int propId, double value) You can change some properties of a VideoCapture object with this method. Parameters: int propId - this argument specifies the property you are going to change. double VideoCapture::get(int propId) This function returns the value of the property specified by propId.
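A Python sketch of the pattern described above, assuming OpenCV's Python bindings (cv2), whose VideoCapture mirrors the C++ API; the cv2 import is deferred inside the function so the file loads even where OpenCV is not installed:

```python
def duration_seconds(frame_count, fps):
    """Clip length computed from two VideoCapture properties,
    get(CAP_PROP_FRAME_COUNT) and get(CAP_PROP_FPS)."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return frame_count / fps

def play_video(path):
    """Open a video file, check isOpened(), then read and display
    frames until the file ends or Esc is pressed."""
    import cv2  # deferred: requires opencv-python on the machine
    cap = cv2.VideoCapture(path)      # VideoCapture(const string& filename)
    if not cap.isOpened():            # always verify initialisation
        raise IOError("cannot open " + path)
    fps = cap.get(cv2.CAP_PROP_FPS)   # get(propId)
    delay = int(1000 / fps) if fps > 0 else 30
    while True:
        ok, frame = cap.read()
        if not ok:                    # end of file or read error
            break
        cv2.imshow("video", frame)
        if cv2.waitKey(delay) == 27:  # Esc
            break
    cap.release()                     # the destructor would also free this
```

For a 30 fps file, duration_seconds(900, 30) gives 30.0 seconds and the per-frame delay works out to roughly 33 ms, close to the waitKey(30) figure used in the tutorial.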

Parameters: waitKey(30) - waits up to 30 ms for a key press between frames. Beaglebone: Video Capture and Image Processing on Embedded Linux using OpenCV | derekmolloy.ie. Introduction In the video below I look at how you can get started with video capture and image processing on the Beaglebone. It is an introductory video that should give people who are new to this topic a starting point to work from. I look at three distinct challenges: How do you capture video from a USB webcam under Linux? I do this using the capture.c source code, which uses Video4Linux to capture a raw stream from the USB camera; this raw stream is then wrapped with an H264 header using FFMPEG. How do you capture image frames from a USB webcam under Linux? I do this using the grabber.c source code, which again uses Video4Linux to capture raw video frames in the uncompressed PPM format. How do you use OpenCV to capture and image-process frames so that you can build computer vision applications under Linux on the Beaglebone? I do this using the boneCV.cpp program as described below.

In this video I use a Logitech C920 and the Beaglebone Black. The Video Molloy, D. Source Code Android - How can I calculate the distance to the target? Android - How can I update my UI in response to tracking events? Android - How do I add another target to the Video Playback sample? Torcellite/imageComparator. Studio. Using Ultrasonics for Detecting People. All of the MaxBotix® Inc. ultrasonic sensors are capable of detecting people. The range at which the MaxSonar® family of sensors can detect people varies from sensor to sensor. When choosing a sensor for people detection, we recommend viewing the sensor's beam plots for the approximate range of people detection. People detection generally falls between grid pattern A and grid pattern B when the sensor is perpendicular to the person.

Typically a person will reflect the same amount of ultrasonic energy as a 1-inch-diameter dowel. Even though people are physically large targets, humans are soft targets that absorb a large amount of ultrasonic sound and reflect only a fraction of it. It has been observed that, because of the soft-target nature of people, occasional range readings may be incorrect. People detection is not perfect. LV-MaxSonar-EZ Ultrasonic sensor test.
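Since occasional readings are incorrect by the article's own admission, a common software mitigation is a median filter over the last few samples; a minimal sketch (the window size is an arbitrary choice, not a MaxBotix recommendation):

```python
from collections import deque
from statistics import median

class RangeFilter:
    """Median-of-the-last-N filter for a stream of ultrasonic
    range readings; a single bad echo cannot drag the output."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, reading_cm):
        self.samples.append(reading_cm)
        return median(self.samples)
```

With window=5 the filter tolerates up to two spurious readings among any five consecutive samples.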

I've been thinking for some time about how I could keep the quadcopter "parked" at a height set by the user, and how the quadcopter could detect and avoid obstacles. After some research I decided to use an ultrasonic sensor. I could also have chosen infrared sensors, but ultrasonic sensors have a longer range, can be used both indoors and outdoors, and come in a variety of configurations, while infrared sensors depend on lighting conditions and on the colour of the surfaces in front of them; their characteristics make them better suited to detecting objects than to measuring distances accurately.

There are multiple ultrasonic sensors available on the market, but the sensors that seemed easiest to use without spending much money are from the MaxBotix company. To test and see how ultrasonic sensors work, I got two of them (LV-MaxSonar-EZ0 and LV-MaxSonar-EZ1), shown in figure 1. XL-MaxSonar-AE0 (MB1300). The Arduino OBD-II Adapter works as a vehicle OBD-II data bridge for Arduino, with an open-source Arduino library provided. Besides providing OBD-II data access, it also provides a power supply (converted and regulated from the OBD-II port) for the Arduino and its attached devices.

Features:
- Directly pluggable into the vehicle's OBD-II port
- Serial data interface (UART or I2C)
- High-efficiency DC-DC module for 5V/3.3V DC output up to 2A
- Supports CAN bus (used by most modern cars), KWP2000 and ISO9141-2 protocols
- Accesses all OBD-II PIDs provided by the vehicle ECU
- Embedded 3-axis accelerometer, 3-axis gyroscope and temperature sensors
Getting Started. OBD Library. SENSOR: HC-SR04. *Difficulty: ★★★/10. In this tutorial we are going to talk about sensors, and more precisely about ultrasonic sensors. This tutorial will teach you the basics of using the HC-SR04 sensor, available on many websites (around €4).

We will build a "radar" of sorts to learn the basics of using this sensor. Afterwards you can give free rein to your imagination, for example adding an LCD screen to display the distance to an object. What you need: -An Arduino board; here we will use this nice Mega 2560, but of course it works with any board! -Two LEDs, in whatever colour you like. -Two resistors (180-560 ohms), though here I will do the wiring with two adjustable resistors instead. -Our famous HC-SR04!! (other models exist, of course) -Plenty of wires (M-F, M-M). -A breadboard. The wiring: in broad terms, the connections are as shown. Programming it all! /* [HC-SR04 distance sensor] */ Result: good game. Maker Movement | LinkSprite Learning Center. Use Ultrasonic Sensor to Measure Distance on pcDuino | LinkSprite Learning Center. www.mon-club-elec.fr/mes_downloads/doc_pcduino/4a.pcduino_personnalisation_du_systeme_de_base_v2ok.pdf.
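The HC-SR04 reports distance as an echo pulse whose width equals the ultrasound round-trip time; the conversion to centimetres is one line of arithmetic. A sketch (assuming roughly 20 °C air; the board-specific pin handling is omitted):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343   # roughly 343 m/s at ~20 °C

def echo_to_cm(echo_us):
    """Distance (cm) from an HC-SR04 echo pulse width (µs).
    The pulse times the round trip, so divide by two."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2
```

echo_to_cm(1000) is about 17.2 cm; many tutorials use the equivalent shortcut distance = duration_us / 58.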

pcDuino. pcDuino V2 board. The pcDuino V2 board is a high-performance mini PC at a very affordable price, fitted with a Wifi module and supporting Ubuntu and Android ICS. Just connect the pcDuino V2 board to a 5 VDC supply, a keyboard, a mouse and a screen to be up and running. The pcDuino V2 has an HDMI video output and is compatible with any television or monitor fitted with this HDMI interface.

This board was designed to make project development easier for the open-source community and can use most 3.3 VDC Arduino-compatible shields thanks to its side connectors (new compared with version V1). An API has been developed that lets pcDuino users program in the Arduino language. Note: the pcDuino V2 board has a built-in Wifi module and its side connectors are compatible with Arduino shields (mind the I/O levels: shields must be 3.3 VDC compatible). Platform - Vuforia. Template Matching. Goal In this tutorial you will learn how to: Use the OpenCV function matchTemplate to search for matches between an image patch and an input image. Use the OpenCV function minMaxLoc to find the maximum and minimum values (as well as their positions) in a given array.

Theory What is template matching? Template matching is a technique for finding areas of an image that match (are similar to) a template image (patch). How does it work? We need two primary components: the source image (I), the image in which we expect to find a match to the template image, and the template image (T), the patch image which will be compared against the source image. Our goal is to detect the highest-matching area. To identify the matching area, we have to compare the template image against the source image by sliding it: by sliding, we mean moving the patch one pixel at a time (left to right, top to bottom).
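The sliding comparison just described can be sketched in pure Python as a toy stand-in for OpenCV's matchTemplate followed by minMaxLoc, using the TM_CCORR_NORMED metric; images here are plain nested lists rather than OpenCV matrices:

```python
import math

def ccorr_normed(patch, tmpl):
    """Normalized cross-correlation of two equal-length pixel lists."""
    num = sum(p * t for p, t in zip(patch, tmpl))
    den = math.sqrt(sum(p * p for p in patch) * sum(t * t for t in tmpl))
    return num / den if den else 0.0

def match_template(image, tmpl):
    """Slide tmpl over image one pixel at a time (left to right,
    top to bottom) and return (best_score, (row, col)), i.e. the
    maximum of the result map R and its location."""
    ih, iw = len(image), len(image[0])
    th, tw = len(tmpl), len(tmpl[0])
    flat_t = [v for row in tmpl for v in row]
    best = (-1.0, (0, 0))
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            score = ccorr_normed(patch, flat_t)
            if score > best[0]:
                best = (score, (r, c))
    return best
```

On a tiny image containing an exact copy of the template, the best score is 1.0 at the copy's top-left corner.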

The result image R comes from sliding the patch with the metric TM_CCORR_NORMED. Code What does this program do? The CImg Library - C++ Template Image Processing Toolkit. pHash.org: Home of pHash, the open source perceptual hash library. OpenCV | OpenCV. VXL - C++ Libraries for Computer Vision. Integrating Vision Toolkit. The Integrating Vision Toolkit (IVT) is a powerful and fast C++ computer vision library with an easy-to-use object-oriented architecture. It offers its own multi-platform GUI toolkit. Availability The library is available as free software under a 3-clause BSD license. It is written in pure ANSI C++ and compiles with any available C++ compiler (e.g. any Visual Studio, any gcc, TI Code Composer).

It is cross-platform and runs on basically any platform offering a C++ compiler, including Windows, Mac OS X and Linux. The included GUI toolkit offers implementations for Windows (Win32 API), Linux (GTK), Mac OS X (Cocoa) and Qt. The computer vision company Keyetech offers platform-specific optimizations of various IVT functions with the Keyetech Performance Primitives (KPP), which are automatically loaded by the IVT. History The IVT has been developed at the formerly named University of Karlsruhe (TH), now the Karlsruhe Institute of Technology (KIT). Features IVT's features include: OpenCV. History The project's stated goals were to: Advance vision research by providing not only open but also optimized code for basic vision infrastructure.

No more reinventing the wheel. Disseminate vision knowledge by providing a common infrastructure that developers could build on, so that code would be more readily readable and transferable. Advance vision-based commercial applications by making portable, performance-optimized code available for free, with a license that did not require applications to be open or free themselves. The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first 1.0 version was released in 2006. In mid-2008, OpenCV obtained corporate support from Willow Garage, and is now again under active development. A version 1.1 "pre-release" was released in October 2008. The second major release of OpenCV was in October 2009. Applications OpenCV's application areas include: IN2AR - Flash Augmented Reality Engine.

Mixare | Free Open Source Augmented Reality Engine. Serving Raspberry Pi with Flask - Matt Richardson, Creative Technologist. The following is an adapted excerpt from Getting Started with Raspberry Pi. I especially like this section of the book because it shows off one of the strengths of the Pi: its ability to combine modern web frameworks with hardware and electronics. Not only can you use the Raspberry Pi to get data from servers via the internet, but your Pi can also act as a server itself. There are many different web servers that you can install on the Raspberry Pi. Traditional web servers, like Apache or lighttpd, serve the files from your board to clients.

Most of the time, servers like these are sending HTML files and images to make web pages, but they can also serve sound, video, executable programs, and much more. However, there's a new breed of tools that extend programming languages like Python, Ruby, and JavaScript to create web servers that dynamically generate the HTML when they receive HTTP requests from a web browser. Flask Basics In order to install Flask, you'll need to have pip installed.
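The point about dynamically generated HTML can be illustrated without installing anything: the sketch below uses Python's standard-library http.server instead of Flask (Flask's decorator-based routing is far more pleasant, but requires pip), building the page at request time:

```python
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    """Build the HTML at request time, the way a Flask view would."""
    def do_GET(self):
        now = datetime.datetime.now()
        body = f"<h1>Pi time: {now:%H:%M:%S}</h1>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):   # silence per-request logging
        pass

def serve(port=8000):
    """Blocking helper: serve on all interfaces, much like
    Flask's app.run(host='0.0.0.0')."""
    HTTPServer(("", port), DynamicHandler).serve_forever()
```

Each request gets a freshly rendered page rather than a static file, which is exactly the property that makes Flask a good match for reading Pi hardware state on demand.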

Examples > WiFi Library > WiFi Web Server. In this example, you will use your WiFi Shield and your Arduino to create a simple web server. Using the WiFi library, your device will be able to answer HTTP requests. After opening a browser and navigating to your WiFi shield's IP address, your Arduino will respond with just enough HTML for a browser to display the input values from all six analog pins.
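The sketch on the Arduino side is C++, but the shape of the response it serves is easy to mock up; this Python helper builds a plausible version of that minimal response (the exact markup is an assumption, not copied from the Arduino example):

```python
def analog_page(values):
    """HTTP response carrying one line per analog input (pins 0-5),
    mimicking what the WiFi Web Server sketch sends to the browser."""
    if len(values) != 6:
        raise ValueError("expected readings for analog pins 0-5")
    rows = "".join(
        f"analog input {pin} is {v}<br />" for pin, v in enumerate(values)
    )
    return ("HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "Connection: close\r\n"
            "\r\n"
            f"<html>{rows}</html>")
```

The status line, headers, blank line and body are exactly the pieces the Arduino sketch writes to the client one println at a time.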

This example is written for a network using WPA encryption. Hardware required: Arduino WiFi Shield; a shield-compatible Arduino board; (optional) six analog sensors attached to analog input pins 0-5. Circuit: the WiFi shield uses pins 10, 11, 12, and 13 for the SPI connection to the HDG104 module. You should have access to an 802.11b/g wireless network that connects to the internet for this example.

For networks using WPA/WPA2 Personal encryption, you need the SSID and password. WEP network passwords are hexadecimal strings known as keys. Reading analogue inputs on a Raspberry Pi with an ADC chip: the MCP3008 - Slog. The MCP3008 analogue-to-digital converter. The Raspberry Pi has GPIO pins that can read or send digital signals, but none for analogue signals.

What is the difference? A digital signal is made up of a sequence of 0s and 1s, which can convey various pieces of information depending on the encoding used. An analogue signal, on the other hand, is encoded by variations in the signal's voltage: reading 3 V is a different piece of information from 2 V, and different again from 0 V. Every value has a meaning, unlike a digital signal, which can only be 0 or 1 (for a 3.3 V signal, for example, all values below 1.5 V will be read as 0 and all values above 2 V will be read as 1; the threshold values vary from chip to chip). We will therefore use a chip to convert analogue signals into digital signals: an ADC (Analogue-to-Digital Converter).

Analogue Sensors On The Raspberry Pi Using An MCP3008. The Raspberry Pi has no built-in analogue inputs, which means it is a bit of a pain to use many of the available sensors.
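The threshold behaviour described above can be written down directly; the 1.5 V / 2 V figures are the text's illustrative values for a 3.3 V part, not a datasheet specification:

```python
def digital_read(voltage, v_low=1.5, v_high=2.0):
    """Interpret an input voltage as a logic level using example
    thresholds for a 3.3 V chip (real thresholds vary by chip).
    Returns 0, 1, or None for the undefined region in between."""
    if voltage < v_low:
        return 0
    if voltage > v_high:
        return 1
    return None
```

This is exactly the information an ADC preserves and a digital pin throws away: the ADC reports the voltage itself, the digital pin only which side of the thresholds it falls on.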

I wanted to update my garage security system with the ability to use more sensors, so I decided to investigate an easy and cheap way to do it. The MCP3008 was the answer. The MCP3008 is a 10-bit, 8-channel analogue-to-digital converter (ADC). It is cheap, easy to connect and doesn't require any additional components. It uses the SPI bus protocol, which is supported by the Pi's GPIO header. This article explains how to use an MCP3008 device to provide 8 analogue inputs which you can use with a range of sensors.
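The SPI exchange with an MCP3008 is three bytes each way: a start bit, a byte carrying the single-ended flag and channel number, and padding while the result clocks back. The framing is pure bit arithmetic, so it can be sketched (and tested) without a Pi; on real hardware the request list would go through spidev's xfer2:

```python
def mcp3008_request(channel):
    """Three bytes to clock out for a single-ended read: a start bit,
    then the single-ended flag plus channel number in the top nibble,
    then a padding byte while the result is clocked back in."""
    if not 0 <= channel <= 7:
        raise ValueError("channel must be 0-7")
    return [1, (8 + channel) << 4, 0]

def mcp3008_value(reply):
    """Assemble the 10-bit result from the three bytes clocked back:
    the low two bits of byte 1 are the high bits of the reading."""
    return ((reply[1] & 3) << 8) | reply[2]

def to_volts(raw, vref=3.3):
    """Scale a 10-bit reading (0-1023) to volts; the Pi's analogue
    reference here is its 3.3 V rail."""
    return raw * vref / 1023
```

A full-scale reply such as [0, 3, 255] decodes to 1023, i.e. the reference voltage.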

Here are the bits I used: a Raspberry Pi, an MCP3008 8-channel ADC, a light-dependent resistor (LDR), a TMP36 temperature sensor, and a 10 kohm resistor. The first step is enabling the SPI interface on the Pi, which is usually disabled by default. Please follow my Enabling The SPI Interface On The Raspberry Pi article to set up SPI and install the SPI Python wrapper. Circuit. More Information. URM37 V3.2 ultrasonic sensor [SEN0001] - €13.80. The URM37 V3.2 sensor is ideal for applications requiring a measurement between moving or fixed objects. Its applications in robotics are very popular, but this sensor is also useful in security systems and wherever infrared cannot be used.

An ultrasonic pulse is sent and the reflection time is measured, thereby determining the distance. This sensor uses an industrial AVR processor as its central unit and includes a temperature sensor, which is unique in this type of sensor. There are three output modes, configurable by jumpers: PWM, RS232 and TTL, which makes it possible to use this sensor with any electronic system. A servo control output is available and can be used for scanning. Characteristics. Documentation. Contents: 1 x URM37 V3.2 ultrasonic sensor. Sharp GP2Y0A710K IR distance sensor (100-550 cm) [SEN0085] - €30.80. "SRF10" ultrasonic rangefinder. Home | LabFab Rennes - a French-language fab lab project.
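The on-board temperature sensor is useful because the speed of sound drifts with air temperature; a standard first-order correction (331.3 m/s at 0 °C plus 0.606 m/s per °C — a textbook approximation, not a figure from the product page) looks like this:

```python
def speed_of_sound_m_s(temp_c):
    """First-order approximation of the speed of sound in air."""
    return 331.3 + 0.606 * temp_c

def compensated_echo_to_cm(echo_us, temp_c=20.0):
    """One-way distance (cm) from a round-trip echo time (µs),
    corrected for the measured air temperature."""
    c_cm_per_us = speed_of_sound_m_s(temp_c) / 10_000  # m/s -> cm/µs
    return echo_us * c_cm_per_us / 2
```

Between 0 °C and 40 °C the speed changes by about 7%, which is exactly the error an uncompensated rangefinder carries.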

BeagleBone Black. "BeagleBone Black" board (Rev. A6A). Texas Instruments Android Development Kit.