
Deep Learning Framework

Audio Fingerprinting with Python and Numpy · Will Drevo, published November 15, 2013. The first day I tried out Shazam, I was blown away. Next to GPS and surviving the fall down a flight of stairs, being able to recognize a song from a vast corpus of audio was the most incredible thing I'd ever seen my phone do. After a few weekends of puzzling through academic papers and writing code, I came up with the Dejavu Project, an open-source audio fingerprinting project in Python. On my testing dataset, Dejavu exhibits 100% recall when reading an unknown wave file from disk or listening to a recording for at least 5 seconds. What follows is all the knowledge you need to understand audio fingerprinting and recognition, starting from the basics. Music as a signal: as a computer scientist, my familiarity with the Fast Fourier Transform (FFT) was only that it was a cool way to multiply polynomials in O(n log n) time. Music, it turns out, is digitally encoded as just a long list of numbers. Sampling. Spectrograms.
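
Since the excerpt turns on sampling and spectrograms, here is a minimal sketch of computing a spectrogram from a wave file with numpy and scipy; it is not code from the Dejavu project, and the file name song.wav and the window parameters are placeholders.

    # Minimal spectrogram sketch with numpy/scipy (not Dejavu's actual code).
    # Assumes a 16-bit WAV file named "song.wav" in the working directory.
    import numpy as np
    from scipy.io import wavfile

    sample_rate, samples = wavfile.read("song.wav")   # samples: the "long list of numbers"
    if samples.ndim > 1:                               # keep a single channel if stereo
        samples = samples[:, 0]
    samples = samples.astype(np.float64)

    window_size = 4096            # samples per FFT window (placeholder value)
    hop_size = window_size // 2
    window = np.hanning(window_size)

    # Slide a Hann window over the signal and take the FFT of each chunk.
    frames = []
    for start in range(0, len(samples) - window_size, hop_size):
        chunk = samples[start:start + window_size] * window
        frames.append(np.abs(np.fft.rfft(chunk)))      # magnitudes of positive frequencies

    # Rows = time frames, columns = frequency bins; log scale mimics a spectrogram plot.
    spectrogram = 10 * np.log10(np.array(frames) + 1e-10)
    print(spectrogram.shape)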

Caffe | Installation. Prior to installing, have a glance through this guide and take note of the details for your platform. We install and run Caffe on Ubuntu 14.04 and 12.04, OS X 10.10 / 10.9 / 10.8, and AWS. The official Makefile and Makefile.config build are complemented by an automatic CMake build from the community. When updating Caffe, it's best to make clean before re-compiling. Prerequisites: Caffe has several dependencies. CUDA is required for GPU mode. The pycaffe and matcaffe interfaces have their own natural needs. For Python Caffe: Python 2.7 or Python 3.3+, numpy (>= 1.7), boost-provided boost.python. For MATLAB Caffe: MATLAB with the mex compiler. cuDNN Caffe: for fastest operation, Caffe is accelerated by drop-in integration of NVIDIA cuDNN. CPU-only Caffe: for cold-brewed CPU-only Caffe, uncomment the CPU_ONLY := 1 flag in Makefile.config to configure and build Caffe without CUDA. CUDA and BLAS: Caffe requires the CUDA nvcc compiler to compile its GPU code and the CUDA driver for GPU operation.
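
Once the build finishes, a quick smoke test of the Python interface might look like the sketch below; it is not part of the official guide, assumes Caffe's python directory is on PYTHONPATH, and the prototxt/caffemodel paths and the "data" blob name are placeholders.

    # Quick pycaffe smoke test (a sketch, not from the official install guide).
    import caffe

    caffe.set_mode_cpu()                     # use caffe.set_mode_gpu() for a CUDA build
    net = caffe.Net("deploy.prototxt",       # placeholder network definition
                    "weights.caffemodel",    # placeholder trained weights
                    caffe.TEST)
    print(net.blobs["data"].data.shape)      # input blob shape, if the net names it "data"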

NVIDIA Jetson TK1 - Caffe Deep Learning Framework - NVIDIA Jetson TK1 Dev. Back in October 2014, Google's Pete Warden wrote an interesting article: How to run the Caffe deep learning vision library on Nvidia's Jetson mobile GPU board. At the time, I thought, "What fun!". However, I noticed in the article that there were then issues with running Caffe on CUDA 6.5, which was just being introduced in L4T 21.1. After the holiday break, I realized that enough time had passed that most of the issues would probably be worked out, since we are now on L4T 21.2, and that I would be able to run Caffe in all its CUDA 6.5 goodness. In fact, I could, and with even better results than the original! Here's the install script on Github: installCaffe.sh. NOTE: This installation was done on a clean system, which was installed using JetPack. NOTE (6-15-2015): Aaron Schumacher found an issue with some of the later versions of Caffe. NOTE (7-10-2015): Corey Thompson also adds: adjust the value from 1099511627776 to 536870912. So why is that an interesting topic?
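
For context on Corey Thompson's note: 1099511627776 bytes is 1 TB and 536870912 bytes is 512 MB, and the value being adjusted is commonly understood to be Caffe's LMDB map size, which does not fit in the TK1's 32-bit address space. The actual change lives in Caffe's C++ sources; the sketch below only illustrates the same map_size parameter from Python with the lmdb package, and is not the Caffe patch itself.

    # Illustration of the LMDB map_size parameter from Python (not the Caffe patch).
    import lmdb

    env = lmdb.open("example_lmdb", map_size=536870912)  # 512 MB fits a 32-bit address space
    with env.begin(write=True) as txn:
        txn.put(b"key", b"value")
    env.close()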

GitHub - GemHunt/CoinSorter: Sorts coins by solenoid on a conveyor by classifying images with Caffe & DIGITS. caffe.pdf. NVIDIA Jetson TK1 - cuDNN install with Caffe example - NVIDIA Jetson TK1 Dev. NVIDIA's cuDNN is a GPU-accelerated library of primitives for deep neural networks, designed to be integrated into higher-level machine learning frameworks, such as UC Berkeley's Caffe deep learning framework. In an earlier blog post, we installed Caffe on a Jetson TK1. Here's a short video on how to install cuDNN and compile Caffe with cuDNN support. Looky here: Note: as of this writing, cuDNN requires CUDA 6.5. In order to install cuDNN, go to the NVIDIA cuDNN page and download the cuDNN libraries using your NVIDIA Developer account. $ tar -zxvf cudnn-6.5version.tgz $ cd cudnn-6.5version # copy the include file $ sudo cp cudnn.h /usr/local/cuda-6.5/include $ sudo cp libcudnn* /usr/local/cuda-6.5/lib The cuDNN libraries are placed into the cuda-6.5 library directory, a convenient place since CUDA 6.5 needs to be in your LD_LIBRARY_PATH. Installing Caffe is straightforward; here's a GitHub gist for the installation. At this point, edit Makefile.config: $ gedit Makefile.config

Best Machine Learning Resources for Getting Started. This was a really hard post to write because I want it to be really valuable. I sat down with a blank page and asked the really hard question of what are the very best libraries, courses, papers and books I would recommend to an absolute beginner in the field of Machine Learning. I really agonised over what to include and what to exclude. I had to work hard to put myself in the shoes of a programmer and beginner at machine learning and think about what resources would best benefit them. I picked the best for each type of resource. If you are a true beginner and excited to get started in the field of machine learning, I hope you find something useful. Programming Libraries: I am an advocate of "learn just enough to be dangerous and start trying things". This is how I learned to program, and I'm sure many other people learned that way too. Find a library, read the documentation, follow the tutorials and start trying things out. Video Courses. Overview Papers. Beginner Machine Learning Books.

CUDA. CUDA (Compute Unified Device Architecture[1]) is an integrated technology introduced by NVIDIA and is the company's official name for GPGPU. With this technology, users can perform computation on NVIDIA GPUs from the GeForce 8 series onward and on newer Quadro GPUs; it was also the first time a GPU could serve as a development environment for a C compiler. Overview: Example of CUDA processing flow. Taking the GeForce 8800 GTX as an example, its core has 128 internal processors. The GeForce 8800 GTX graphics card can reach 520 GFLOPS of compute, and with an SLI system this can reach 1 TFLOPS. However, when using CUDA, programmers must manage three different kinds of memory and face a complex thread hierarchy, and the compiler cannot automate most of these tasks; these issues raise the difficulty of development. Software vendors have already used CUDA to develop a plugin for Adobe Premiere Pro. After NVIDIA acquired AGEIA, it obtained the related physics-acceleration technology, namely the PhysX physics engine. To bring CUDA to consumer use, NVIDIA holds a series of programming contests that ask contestants to develop programs fully exploiting CUDA's computing potential. In August 2008, NVIDIA released CUDA 2.0[7]. CUDA is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) it manufactures. With CUDA, GPUs can be used for general-purpose processing (not just graphics); this approach is known as GPGPU. Developers can work with the CUDA platform through CUDA-accelerated libraries, compiler directives (such as OpenACC), and extensions to industry-standard programming languages such as C, C++, and Fortran. In the computer-game industry, GPUs are used not only for graphics rendering but also for game physics (effects such as debris, smoke, fire, and fluids), for example with PhysX and Bullet. CUDA provides both a low-level API and a high-level API. Advantages: compared with traditional general-purpose computation on GPUs (GPGPU) through graphics APIs, CUDA has several advantages: scattered reads (code can read from arbitrary addresses in memory); unified virtual memory (CUDA 6); shared memory (CUDA exposes a fast shared memory region, 48 KB per processor, that can be shared among threads). Limitations: CUDA does not support the full C standard. Applications. Supported products. Example. References. Related entries.
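
As an illustration of the programming model sketched above (a kernel launched over a grid of threads), here is a minimal example using the PyCUDA bindings; it assumes the pycuda package, the nvcc compiler, and a CUDA-capable GPU are available, and it is not drawn from the article.

    # A minimal PyCUDA sketch: launch a trivial kernel that doubles each array element.
    import numpy as np
    import pycuda.autoinit                 # initializes the CUDA driver and creates a context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void double_elements(float *a)
    {
        int idx = threadIdx.x + blockIdx.x * blockDim.x;
        a[idx] *= 2.0f;
    }
    """)
    double_elements = mod.get_function("double_elements")

    a = np.random.randn(256).astype(np.float32)
    a_doubled = a.copy()
    # One block of 256 threads, one thread per element.
    double_elements(drv.InOut(a_doubled), block=(256, 1, 1), grid=(1, 1))

    print(np.allclose(a_doubled, a * 2))   # True if the kernel ran correctly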

Deep Learning step by step: the implementation. The current enthusiasm for Deep Learning does not rest solely on the conceptual advances of Hinton et al., but also on technological advances. After the introduction to the concepts presented in Part I of this article, we now turn to the questions raised by implementing these networks. The key issue of performance: for a computer scientist, implementing a DBN comes down mainly to evaluating formula (7). Despite the clever simplifications obtained by using the Contrastive Divergence algorithm and by choosing RBMs as the elementary building blocks, the mathematical expressions to be evaluated remain very costly in compute time. The performance of the algorithms is therefore vital for any tool or language aimed at this field. GPUs have much lower clock frequencies than CPUs (20x) but have many more cores (compute units). Figure 1: GPU vs. CPU architecture. Caffe: a "packaged" approach.
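
Since the excerpt centers on Contrastive Divergence over RBMs, the following is a minimal numpy sketch of a single CD-1 update for a binary RBM; it is a generic illustration of the technique, not the article's code, and it does not reproduce the article's formula (7). The layer sizes and learning rate are placeholder values.

    # One CD-1 update for a binary RBM, in plain numpy (an illustrative sketch).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 784, 256, 0.01      # placeholder sizes and learning rate

    W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # weights
    b = np.zeros(n_visible)                                 # visible biases
    c = np.zeros(n_hidden)                                  # hidden biases

    def cd1_step(v0):
        """v0: (batch, n_visible) batch of binary training vectors."""
        global W, b, c
        # Positive phase: hidden activations driven by the data.
        h0_prob = sigmoid(v0 @ W + c)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step back to the visible layer.
        v1_prob = sigmoid(h0 @ W.T + b)
        h1_prob = sigmoid(v1_prob @ W + c)
        # Gradient estimates: positive minus negative correlations.
        batch = v0.shape[0]
        W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        b += lr * (v0 - v1_prob).mean(axis=0)
        c += lr * (h0_prob - h1_prob).mean(axis=0)

    # Usage: one update on a random binary batch.
    cd1_step((rng.random((32, n_visible)) < 0.5).astype(float))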

Caffe + Ubuntu 14.04 + CUDA 6.5 beginner's installation and configuration guide - Information School Embedded Systems Lab. This is a long, rambling write-up with no screenshots; I have been wrestling with this stuff for the past few days and really had no choice: I didn't want to use Linux, but for Caffe there was no way around it. Installing all of this, I ran into many problems, and each one took a long time to work through; the first time is probably always like this. I expect I will hit many more problems once I start actually using it, but there is no turning back now! One piece of advice: if you plan to work with large datasets later, leave plenty of disk space for Linux in advance; for ImageNet, even 500 GB would not be excessive. This installation guide is aimed at complete beginners; experts, please hold the snark. The article has five parts: Part 1, installing Linux; Part 2, installing and testing the nVidia driver and CUDA Toolkit; Part 3, installing and testing Caffe; Part 4, installing and testing Python; Part 5, installing and testing Matlab. PS: Parts 4 and 5 are not finished yet and will be added later. For the Linux installation, if you are not a Linux fan and are only forced to use it for research, I suggest a dual-boot setup; there are plenty of guides online, so I won't go into detail. The installation is fairly point-and-click and similar to installing Windows. As for the language, if you want more of a challenge you could install the English version, or even Japanese or German; I installed the Simplified Chinese version. I set aside 100 GB in total for Ubuntu 14.04, the latest release; one nice thing is that it can read the Windows 8.1 NTFS partitions directly without extra steps, and it supports Chinese, for example: $ cd /media/yourname/partition-name/folder-name (the GUI is even more convenient). My partition layout: root partition /, 50 GB; swap partition, 16 GB (the same size as my RAM; reportedly, with less than 16 GB of RAM you should make the swap 1.5-2x the RAM); home partition, the remaining 34 GB. After installing, reboot; some machines boot straight into Linux and some straight into Windows; Google or Baidu for a fix, because I can't really explain how I sorted this out myself. I got my desktop working, but on my laptop the Windows partition was damaged and I had to reinstall Windows 8.1; since the laptop has no nVidia GPU, I didn't bother going further there. PS: In hindsight the space may be too small; given ImageNet's 137 GB of training files, setting the home partition to 300-500 GB or more would probably be better. Part 2: installing and testing the nVidia driver and CUDA Toolkit. 1. Verify you have a CUDA-capable GPU. $ gcc --version. Stop the desktop service.
