
Raspberry Pi + OpenCV
OpenCV is a suite of powerful computer vision tools. Here is a quick overview of how I installed OpenCV on my Raspberry Pi with debian6-19-04-2012. The guide is based on the official OpenCV Installation Guide on Debian and Ubuntu. Before you begin, make sure you have expanded your SD card to allow room for the OpenCV install. First install the dependencies:

sudo apt-get -y install build-essential cmake cmake-qt-gui pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools
sudo apt-get -y install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs ffmpeg libavcodec-dev libavcodec53 libavformat53 libavformat-dev libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev libxine1-ffmpeg libxine-dev libxine1-bin libunicap2 libunicap2-dev libdc1394-22-dev libdc1394-22 libdc1394-utils swig libv4l-0 libv4l-dev python-numpy libpython2.6 python-dev python2.6-dev libgtk2.0-dev pkg-config

Then, from a build directory inside the unpacked OpenCV source, configure and build:

cmake-gui ..
make
sudo make install
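After the install, a quick Python session is an easy sanity check that the bindings and the camera work. This is only a sketch, assuming the OpenCV 2.x cv2 module and a V4L2 camera on /dev/video0 (adjust the device index to your setup):

import cv2

print cv2.__version__              # should print the version you just built

cap = cv2.VideoCapture(0)          # first V4L2 device, /dev/video0
ok, frame = cap.read()
if ok:
    print "captured a frame of size", frame.shape
else:
    print "could not grab a frame - check the camera"
cap.release()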

Webcam streaming with Raspberry Pi
Last Updated on Wednesday, 06 March 2013 21:16
Here is a set of instructions for the installation and configuration of a Raspberry Pi to provide streaming video from a webcam. I set this up in preparation for my Nestcam project, which will hopefully appear on these pages in the coming months. To capture video and snapshots from a webcam and stream them through a webserver I use Motion: "Motion is a program that monitors the video signal from cameras." Motion can be set up in such a way that it only captures images and/or video when something moves in the camera's field of vision. One error you may run into along the way is:

[video4linux2 @ 0x8cb6c0] The v4l2 frame is 8316 bytes, but 153600 bytes are expected

To watch the stream in browsers other than Firefox you'll need to install a Java applet. Update below. Chrome (Windows) will display the first image when using the IFRAME method, but it won't reload the stream. IE 9 apparently can't deal with either method. Also disable unused services (e.g.
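To check the stream from another machine without a browser, a few lines of Python can pull a single JPEG frame out of Motion's MJPEG stream. This is just a sketch: the host name and Motion's default stream port 8081 are assumptions you will need to adjust.

import urllib2

# Motion's MJPEG stream (default stream port 8081; host name is an assumption)
stream = urllib2.urlopen("http://raspberrypi.local:8081/")

data = ""
while True:
    data += stream.read(1024)
    start = data.find("\xff\xd8")    # JPEG start-of-image marker
    end = data.find("\xff\xd9")      # JPEG end-of-image marker
    if start != -1 and end != -1 and end > start:
        with open("snapshot.jpg", "wb") as f:
            f.write(data[start:end + 2])
        print "wrote snapshot.jpg"
        break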

Autonomous car | Jeff's Inventions
I recently gave a talk at embedded systems night at a Chicago hackerspace called Pumping Station One (www.pumpingstationone.org). In it, I explained why I chose to go with embedded vision for obstacle avoidance in my autonomous car and discussed options for overcoming the performance issues of embedded systems. In the DARPA Grand Challenge, full-size, unmanned vehicles were tasked with following a route in the desert described by GPS coordinates. I thought it would be interesting to do the same thing, but with a radio-controlled car (in my case, a Traxxas Slash 5803). To take control of the car's steering and acceleration, I needed to understand the signals currently used to control the car and figure out how to emulate them with a microcontroller.
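For context, hobby RC receivers drive the steering servo and the ESC with a roughly 50 Hz pulse train whose high time runs from about 1 ms to 2 ms, with 1.5 ms as neutral. The sketch below shows one way to emulate such a signal from a Raspberry Pi using the RPi.GPIO software PWM class; the pin number and duty-cycle values are assumptions, and software PWM jitter means a dedicated microcontroller or hardware PWM is usually the better choice for real steering control.

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                     # assumed BCM pin wired to the servo signal line

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

pwm = GPIO.PWM(SERVO_PIN, 50)      # 50 Hz frame rate, as used by RC servos/ESCs
pwm.start(7.5)                     # 7.5% of 20 ms = 1.5 ms pulse, roughly neutral

try:
    pwm.ChangeDutyCycle(5.0)       # ~1.0 ms pulse: one steering extreme
    time.sleep(1)
    pwm.ChangeDutyCycle(10.0)      # ~2.0 ms pulse: the other extreme
    time.sleep(1)
    pwm.ChangeDutyCycle(7.5)       # back to neutral
    time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()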

gpio - How can I control Lego motors? - Raspberry Pi Beta - Stack Exchange
The standard Lego Mindstorms sensors are analogue (i.e. a voltage between 0 and 5 V) or digital (I²C or RS-485) (source). Analogue sensors: I don't think the Raspberry Pi has a broken-out pin in the GPIO for an ADC (analogue-to-digital converter), so we can't interface with analogue sensors (without an extra microcontroller). Digital sensors: the Raspberry Pi does have two pins in the GPIO for I²C, which means that if you connect GND, +V, SDA and SCL to your sensors, you should be able to use an I²C library to talk to them. For example:

raspberrypi bootc # echo tmp102 0x48 > /sys/class/i2c-adapter/i2c-0/new_device
raspberrypi bootc # sensors
tmp102-i2c-0-48
Adapter: bcm2708_i2c.0
temp1: +21.6°C (high = +160.0°C, hyst = +150.0°C)

Examples: there are a few articles on how to connect sensors and motors on this page, such as connecting a Mindstorms brick controller to an external microcontroller:
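The same kind of I²C device can also be read straight from Python with the smbus module (python-smbus package). This is only a sketch, assuming bus 0 (bcm2708_i2c.0 as in the output above; revision 2 boards expose bus 1 on the header) and a tmp102 at address 0x48:

import smbus

BUS = 0           # i2c-0, matching bcm2708_i2c.0 above; use 1 on rev 2 boards
ADDRESS = 0x48    # tmp102 address from the example above

bus = smbus.SMBus(BUS)

# Register 0 holds the temperature; SMBus delivers the two bytes swapped.
raw = bus.read_word_data(ADDRESS, 0)
raw = ((raw & 0xFF) << 8) | (raw >> 8)

# The upper 12 bits are the reading, 0.0625 degrees C per LSB (positive temperatures).
print "temperature: %.1f C" % ((raw >> 4) * 0.0625)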

Streaming Your Webcam w/ Raspberry Pi | Wolf Paulus
[Last updated on Feb. 2, 2013 for (2012-12-16-wheezy-raspbian) Kernel Version 3.2.27+]
Three years ago, we bought two small webcams, and since we wanted to use them on Linux and OS X, we went with the UVC and Mac compatible Creative LIVE! CAM Video IM Ultra. This webcam (Model VF0415) has a high-resolution sensor that lets you take 5.0-megapixel pictures and record videos at up to 1.3 megapixels; supported resolutions include 640×480, 1280×720, and 1280×960. If you like, you can go back and read what I was thinking about the IM Ultra back in 2009. With the USB camera attached to the Raspi, lsusb returns something like this: Using the current Raspbian "wheezy" distribution (Kernel 3.2.27+), one can find the following related packages, ready for deployment:

luvcview, a camera viewer for UVC-based webcams, which includes an MJPEG decoder and is able to save the video stream as an AVI file
uvccapture, which can capture an image (JPEG) from a USB webcam at a specified interval
MJPG-streamer
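If the OpenCV Python bindings from the earlier notes are installed, a few lines of Python can do roughly what uvccapture does and grab a JPEG from the UVC camera at a fixed interval. A sketch, assuming the camera shows up as /dev/video0 and that cv2 is available; the interval and snapshot count are arbitrary:

import time
import cv2

cap = cv2.VideoCapture(0)          # /dev/video0
INTERVAL = 10                      # seconds between snapshots (assumption)

for i in range(3):                 # grab three snapshots, then stop
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("snapshot-%03d.jpg" % i, frame)
        print "saved snapshot-%03d.jpg" % i
    time.sleep(INTERVAL)

cap.release()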

Inspired by the amazing things the Boreatton Scouts group are doing with their Raspberry Pis, as well as a conversation with David Lamb and Andrew Attwood – two colleagues of mine at LJMU – I thought it was about time I actually tried to use my Pi for something other than recompiling existing software. I'm not a hardware person. Not at all. But I do have a Lego Mindstorms NXT robot which has always had far more potential than I've ever had the energy to extract from it. After reading about how it's possible to control the NXT brick with Python using nxt-python, and with David pointing out how manifestly great it would be to get the first-year undergraduates learning programming with it, I couldn't resist giving it a go. It turned out to be surprisingly easy. I'm not exactly sure why I bought such a huge lead given I knew it would all end up on top of the robot, but that's planning for you! The result really is as crazy and great as I'd hoped.
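For anyone wanting to try the same thing, driving one of the NXT motors from nxt-python really is only a few lines. A sketch, assuming nxt-python 2.x, a brick reachable over USB or Bluetooth, and a motor plugged into port B:

import nxt.locator
from nxt.motor import Motor, PORT_B

# Find the first NXT brick reachable over USB or Bluetooth.
brick = nxt.locator.find_one_brick()

# Spin the motor on port B: roughly half power, one full rotation (360 tacho units).
motor = Motor(brick, PORT_B)
motor.turn(64, 360)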

ROS on RaspberryPi - JR 8-Mar-2014 - ROS Hydro with rosserial_arduino
This install is based almost exactly on this ROS wiki page (~ Nov 2013) by JonStephan, with only a few details changed. It starts with the Raspberry Pi Foundation NOOBS v1.3.4 zip file I downloaded 7 March 2014; select the Raspbian OS for installation.
Shortcut! Building this installation from source took a good fraction of my weekend, but it doesn't have to take yours. I have made a gzipped copy (as described here) of the 16GB SD card I used for this exercise, up through the Arduino test. The sha1sum is 3fce7acb04f002fc93d88edeafa2d2d87b65de7a. This image does not automatically source the ROS setup script from .bashrc, so if you want this feature, do:

echo "source ~/ros_catkin_ws/install_isolated/setup.bash" >> .bashrc
source .bashrc

That's the shortcut; here's the long way.
Install dependencies:

sudo apt-get install python-rosdep python-rosinstall-generator build-essential

ROS Hydro install:

wstool rm roslisp
rm -rf roslisp

Rosserial install
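Once the catkin workspace is built and sourced, a tiny rospy node is a handy way to confirm the installation, and if you flash the stock rosserial_arduino HelloWorld example it will show the Arduino's messages. This is just a sketch; it assumes roscore is running and that the publisher uses the usual "chatter" topic:

#!/usr/bin/env python
# Minimal rospy subscriber: prints std_msgs/String messages from "chatter".
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

rospy.init_node("chatter_listener")
rospy.Subscriber("chatter", String, callback)
rospy.spin()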

Setting everything up for OpenCV – Raspberry Pi | D's Lab Log
Setting OpenCV up on the Raspberry Pi took me two attempts and about 20 hours. I'm writing this guide so others don't have to go through all the problems. I will mainly use OpenCV with Python; I haven't tested it with C/C++. You will need:
A Raspberry Pi
A 3 GB or larger SD card with the Raspbian image
A working Internet connection
A way to see the Raspi's desktop environment (for testing)
Patience
Optional:
Attempt #1 – Following other guides. Googling about OpenCV and Raspberry Pi leads to a few guides on how to set it up (this and this). First of all, you need to install all the dependencies. And that's about it from the other guides that we can use.
Attempt #2 – Figuring it out by myself. So... what now? wget the OpenCV source archive. It's quite heavy; it will take some time to download. Then unpack it:

tar -xvjpf download (or whatever name it has on your machine)

We go inside the new directory that we got from unpacking, and make a new one within it.
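After the build and install finish, a short Python script that exercises the core, imgproc and highgui modules makes a reasonable smoke test. A sketch, assuming the cv2 bindings ended up on your Python path; test.jpg is a placeholder for any image you have on the card:

import cv2

img = cv2.imread("test.jpg")                   # any JPEG you have lying around
if img is None:
    raise SystemExit("could not read test.jpg - check the path")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # imgproc: colour conversion
edges = cv2.Canny(gray, 100, 200)              # imgproc: Canny edge detection
cv2.imwrite("edges.jpg", edges)                # highgui: write the result to disk
print "wrote edges.jpg"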

Through the Interface: Creating a motion-detecting security cam with a Raspberry Pi - Part 1
As mentioned in these previous posts, I've been spending some time developing a social media-enabled security cam using a Raspberry Pi and a standard webcam. The eventual idea is that the security cam will check visitors against a database of photos of a homeowner's friends extracted from Facebook. I have a lot of the needed "social" components in place – more on those in a future post – but I did just want to document some of the steps needed to create a functional security cam that simply uploads captured videos to Google Drive and sends an email with both a link to the video and an attached image frame (to make quick identification of the visitor easier, especially when reading the email on a mobile device). Most of the components needed for this were already in place – and have been used to good effect in several other comparable projects out there – but I thought I'd just gather some key links in one place (some of which are repeated from last time):
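The email half of that is plain Python standard library. Here is a rough sketch of sending a notification with an attached frame via smtplib; the SMTP host, the addresses, the file name and the Drive link are all placeholders, and the project's actual code may well differ:

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

SMTP_HOST = "smtp.example.com"                 # placeholder
SENDER = "securitycam@example.com"             # placeholder
RECIPIENT = "me@example.com"                   # placeholder
VIDEO_LINK = "https://drive.google.com/..."    # link returned by the Drive upload

msg = MIMEMultipart()
msg["Subject"] = "Visitor detected"
msg["From"] = SENDER
msg["To"] = RECIPIENT
msg.attach(MIMEText("Motion detected - video: " + VIDEO_LINK))

with open("frame.jpg", "rb") as f:             # frame pulled from the captured video
    msg.attach(MIMEImage(f.read(), name="frame.jpg"))

server = smtplib.SMTP(SMTP_HOST)
server.sendmail(SENDER, [RECIPIENT], msg.as_string())
server.quit()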

NvidiaGraphicsDrivers
This page describes how to install the NVIDIA proprietary display driver on Debian systems. NOTE: For Apple systems, follow these steps first to prevent a black screen after installing the drivers.
Identification
The NVIDIA graphics processing unit (GPU) series/codename of an installed video card can usually be identified using the lspci command. For example:

$ lspci -nn | grep VGA
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation G80 [GeForce 8800 GTS] [10de:0193] (rev a2)

See HowToIdentifyADevice/PCI for more information.
nvidia-detect
The nvidia-detect script (nvidia-detect package in non-free) can also be used to identify the GPU and the required driver:

$ nvidia-detect
Detected NVIDIA GPUs:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF108 [GeForce GT 430] [10de:0de1] (rev a1)
Your card is supported by the default drivers.

Drivers Installation

InstallGuide : Debian
Here you will find a step-by-step compilation guide for GNU/Linux Debian Squeeze and *Ubuntu. Why compile? You cannot use the python-opencv package because it provides only outdated Python support (for more information, check this bug report). On Ubuntu, the Python bindings won't work (check this bug report), so you must compile from source. There is also some more info in the Linux Install Guide for OpenCV.
Prerequisites
Packages needed: the packages you will need can be installed using the following commands (on Debian Lenny). Note that not all of those packages are necessarily needed.

apt-get install libpython2.6 python-dev python2.6-dev # Only if you want to use python

If your system has trouble building "libjpeg.so", you may need to build it manually, or try this:

apt-get install libjpeg-progs libjpeg-dev

On Ubuntu 10.10 this apt-get was needed to recognize the gstreamer-app and gstreamer-video development headers:

apt-get install libgstreamer-plugins-base0.10-dev

Getting the latest stable OpenCV version a.
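A quick way to see which Python bindings you actually ended up with (the legacy cv module shipped by python-opencv versus the cv2 module from a source build) is a check like the following; just a sketch:

# Report which OpenCV Python bindings are importable.
try:
    import cv2
    print "cv2 bindings available"
except ImportError:
    print "no cv2 module"

try:
    import cv                      # legacy bindings (e.g. from python-opencv)
    print "legacy cv module available"
except ImportError:
    print "no legacy cv module"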

Cross-compilation & Distributed compilation for the Raspberry Pi | Jeremy Nicola
Introduction: compiling large programs such as a Linux kernel, or big libraries like OpenCV or OpenNI, directly on your Raspberry Pi will take a lot of time, and will sometimes even fail (I was not able to get past 6% of the compilation of the PCL when compiling directly on the Pi). In this howto I will show you how to:
cross-compile programs, i.e. compile a program on your PC so that it will run on your Raspberry Pi
distribute the compilation, so that when you compile a program from your Raspberry Pi it actually gets cross-compiled on your remote PC(s), in a totally transparent manner
I am assuming that, just as I do, you are working directly in your $HOME directory, both on your Raspberry Pi and on your PC, which runs a Debian-flavoured Linux distribution (Debian, Linux Mint Debian Edition, Ubuntu...).
1 - Install a toolchain
To cross-compile you have to set up a toolchain. You can either build it yourself or use the one provided by the Raspberry Pi Foundation. A simple Hello world!

WebHome
Welcome to the home of Motion, a software motion detector. Motion is a program that monitors the video signal from cameras. It is able to detect if a significant part of the picture has changed; in other words, it can detect motion. See more below.
Documentation
Download - these unofficial versions exist that hopefully will soon be merged into the official project
Support
Development
Motion Patches - contribute your modification and see what others have shared
Related Projects (incl. video4linux loopback device)
Motion at Sourceforge - for file releases
What is Motion? Motion is a program that monitors the video signal from one or more cameras and is able to detect if a significant part of the picture has changed; in other words, it can detect motion.

OpenCV Installation Troubleshooting Guide « ozbotz.org
August 18th, 2011 | Posted by Osman Eralp | Software | Tags: opencv, software, ubuntu
[Revised 2011-08-29]
Here is a list of error messages that are addressed by this guide:

ERROR: libx264 not found
undefined reference to 'x264_encoder_open_116'
libv4lconvert-priv.h:25:21: fatal error: jpeglib.h: No such file or directory
libv4l1.c:53:28: fatal error: linux/videodev.h: No such file or directory
/home/osman/src/opencv/OpenCV-2.3.0/modules/highgui/src/cap_ffmpeg_impl.hpp:492:13: error: 'CODEC_TYPE_VIDEO' was not declared in this scope
libv4lconvert: warning more framesizes then I can handle!

Problems Building ffmpeg
Problem: ...
Solution: Use ffmpeg version 0.7.x.
Problem: ...
Solution: To rebuild ffmpeg, change to the directory where you untarred the ffmpeg source files, and enter the following commands: ... This solution comes from ...
Problems Building v4l
Problem: ...

cmd-robot - This is Tom and Cian's robot code repository!
The CMDrobot project incorporates all the mechatronic engineering disciplines: mechanical, electronic, computer, software, control and systems design. The project founders are Tom Ingram and Cian Byrne. The aim of the CMDrobot project is to create a network of robots which can wirelessly communicate with base stations as well as mobile field agents. The robots are able to move about to provide real-time and lapsed data back to CMDhq for use by various software clients. Information capable of being received includes video/audio feeds, GPS coordinates, nearby network information, detected hazards and on-board component statuses. The project involves the construction of two "CMDrobots". OPSEC CLASSIFIED - missiles currently beta testing.
Programming Languages
Throughout the course of the project we have experimented with and used a variety of programming languages, including Python, Visual Basic, C++ and Bash. Every language has a part in the project, which is detailed below! Enjoy.
