
Technical expertise


There is nothing wrong with rand per se, but there is a lot of confusion about what rand does and how it does it. This often results in rand being used in situations where it is a very poor choice. In general, rand should be avoided except in the simplest cases where pseudorandom numbers are desired, and even then there are caveats. So what exactly is the problem with rand? Let us count the ways. Range: rand returns numbers in the range [0, RAND_MAX], and RAND_MAX is specified with a minimum value of only 32,767. Portable code cannot assume that the range is any larger than this, so portable code is very limited in the random numbers it can predictably generate. Compare this with better random number generators that have a range of 32 or 64 bits. So when should you use rand? Before answering that, there is another wrench that can be thrown in our gears: a great many applications need a smaller range than 0 to 32,767.

int r = rand() % N; All is well, right? Right? Not quite: unless N evenly divides RAND_MAX + 1, the modulo operation maps slightly more raw values onto the low results than onto the high ones, a defect known as modulo bias.
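As an illustrative aside (not from the quoted article): one common way to avoid modulo bias is rejection sampling, discarding raw values that fall into the uneven remainder at the top of rand's range. A minimal C sketch, assuming only the standard library:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Return a uniformly distributed integer in [0, n), for 0 < n <= RAND_MAX.
   Raw values above the largest multiple of n are rejected and redrawn,
   so every residue class is equally likely. */
static int uniform_rand(int n)
{
    /* Largest multiple of n that fits below rand()'s upper bound. */
    int limit = RAND_MAX - (RAND_MAX % n);
    int r;
    do {
        r = rand();
    } while (r >= limit);   /* reject the uneven tail */
    return r % n;
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; i++)
        printf("%d\n", uniform_rand(10)); /* unbiased digits 0-9 */
    return 0;
}
```

The rejection loop discards at most n - 1 of the RAND_MAX + 1 possible raw values per pass, so in practice it almost always returns on the first draw.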

Shell And Wait. This page describes the VBA Shell function and provides a procedure that will wait for a Shell'd process to terminate. The VBA Shell function can be used to start an external program or perform any operation for which you would normally use the Run item on the Windows Start menu. Shell starts the command text and then immediately returns control to the calling VBA code; it does not wait for the command to terminate. The page's ShellAndWait function calls Shell and then waits for the Shell'd process to terminate or for a prescribed timeout interval to expire; you can specify the interval after which the function times out and returns to the caller. In the declaration of ShellAndWait, ShellCommand is the string that is passed to the VBA Shell function; the remaining parameters are TimeOutMs and BreakKey.

Spreadsheet::WriteExcel is dead. Long live Excel::Writer::XLSX. | jmcnamara. Last week I released a new version of Excel::Writer::XLSX to CPAN that was 100% API compatible with Spreadsheet::WriteExcel. This marked a milestone: I am now able to move past WriteExcel's feature set and on to new features that I wasn't able to support previously. This was achieved with 30 CPAN releases in almost exactly a year; by comparison, WriteExcel took almost 10 years. That gives a fair indication of the disparity of effort required to implement a feature in the pre-Excel 2007 binary xls format as opposed to the new XML-based xlsx format.

So, from now on, all new features will go into Excel::Writer::XLSX and Spreadsheet::WriteExcel will be in maintenance mode only. The first of the new features, conditional formatting, was added yesterday. With Excel::Writer::XLSX you can now add a format like the following: $worksheet1->conditional_formatting( 'B3:K12', { type => 'cell', format => $light_red, criteria => '>=', value => 50, } );

Bc Command Manual. An arbitrary precision calculator language, version 1.06. Description: bc [ -hlwsqv ] [long-options] [ file ... ]. bc is a language that supports arbitrary precision numbers with interactive execution of statements. There are some similarities in the syntax to the C programming language. A standard math library is available by command line option; if requested, the math library is defined before processing any files. bc starts by processing code from all the files listed on the command line in the order listed. This version of bc contains several extensions beyond traditional bc implementations and the POSIX draft standard.

The author would like to thank Steve Sommars (Steve.Sommars@att.com) for his extensive help in testing the implementation. Email bug reports to bug-bc@gnu.org. Command line options: -h, --help prints the usage and exits; -l, --mathlib defines the standard math library; -w, --warn gives warnings for extensions to POSIX bc; -s, --standard processes exactly the POSIX bc language.

Laser Diode. 2-2-2. Reliability Theory. The well known bathtub curve describes the failure rate of most electrical devices (Fig. 2). In the initial stage the failure rate starts off high and decreases over time; in the random failure stage that follows, the failure rate is constant; the last stage is the wearout period, in which the failure rate steadily increases. Equipment usually has a much shorter lifetime than the semiconductor devices inside it, so the devices themselves rarely reach the wearout stage. Through the years, improvements in laser diode technology have extended average lifetimes to the level typical of semiconductor devices.

When a light-emitting device such as a laser diode operates in the forward direction, the increase in current that does not contribute to light emission causes the light emission characteristics to change over time. The general behavior of a semiconductor laser is characterized by the relationship between temperature and the rate of drive-current increase, from which the failure rate follows.
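As a hedged aside (this is the standard reliability formulation, not necessarily the source's own equation): the temperature dependence of semiconductor failure rates is commonly modeled with an Arrhenius law, where the symbols λ₀ and Eₐ are illustrative:

λ(T) = λ₀ · exp(−Eₐ / (k_B · T))

Here λ(T) is the failure rate at absolute temperature T, Eₐ is an activation energy, and k_B is Boltzmann's constant; the acceleration factor between two operating temperatures T₁ < T₂ is then exp[(Eₐ/k_B)(1/T₁ − 1/T₂)].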

Controlling brain cells with light. 21.01.15 - EPFL scientists have used a cutting-edge method to stimulate neurons with light, and have successfully recorded synaptic transmission between neurons in a live animal for the first time.

Neurons, the cells of the nervous system, communicate by transmitting chemical signals to each other through junctions called synapses. This “synaptic transmission” is critical for the brain and the spinal cord to quickly process the huge amount of incoming stimuli and generate outgoing signals. However, studying synaptic transmission in living animals is very difficult, and researchers have to use artificial conditions that don’t capture the real-life environment of neurons. Now, EPFL scientists have observed and measured synaptic transmission in a live animal for the first time, using a new approach that combines genetics with the physics of light.

Their breakthrough work, covering activating neurons with light and recording neuronal transmissions, is published in Neuron (reference: Pala A, Petersen CCH).

Farey sequence. Symmetrical pattern made by the denominators of the Farey sequence, F8. In mathematics, the Farey sequence of order n is the sequence of completely reduced fractions between 0 and 1 which, when in lowest terms, have denominators less than or equal to n, arranged in order of increasing size.

Each Farey sequence starts with the value 0, denoted by the fraction 0⁄1, and ends with the value 1, denoted by the fraction 1⁄1 (although some authors omit these terms). Examples: the Farey sequences of orders 1 to 8 are:
F1 = {0/1, 1/1}
F2 = {0/1, 1/2, 1/1}
F3 = {0/1, 1/3, 1/2, 2/3, 1/1}
F4 = {0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1}
F5 = {0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1}
F6 = {0/1, 1/6, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 1/1}
F7 = {0/1, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 2/5, 3/7, 1/2, 4/7, 3/5, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 1/1}
F8 = {0/1, 1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8, 1/1}
History: "The history of 'Farey series' is very curious" — Hardy & Wright (1979) Chapter III[1]; "... once again the man whose name was given to a mathematical relation was not the original discoverer so far as the records go." — Beiler (1964) Chapter XVI[2]. Properties: the Farey sequence of order n contains all of the members of the Farey sequences of lower orders, and the middle term of a Farey sequence Fn is always 1/2, for n > 1.
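As an illustrative aside (not part of the quoted article): successive terms of a Farey sequence can be generated with the standard next-term recurrence, where from consecutive terms a/b and c/d the following term is p/q with k = ⌊(n + b)/d⌋, p = k·c − a, q = k·d − b. A minimal C sketch:

```c
#include <stdio.h>

/* Print the Farey sequence of order n in ascending order.
   Starts from 0/1, 1/n and repeatedly applies the next-term
   recurrence: k = (n + b) / d; (a/b, c/d) -> (c/d, (k*c - a)/(k*d - b)). */
static void farey(int n)
{
    int a = 0, b = 1, c = 1, d = n;  /* first two terms: 0/1 and 1/n */
    printf("%d/%d", a, b);
    while (c <= n) {
        int k = (n + b) / d;
        int p = k * c - a, q = k * d - b;
        printf(" %d/%d", c, d);
        a = c; b = d; c = p; d = q;
    }
    printf("\n");
}

int main(void)
{
    farey(5);  /* prints: 0/1 1/5 1/4 1/3 2/5 1/2 3/5 2/3 3/4 4/5 1/1 */
    return 0;
}
```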

Neural networks and deep learning. The human visual system is one of the wonders of the world. Consider the following sequence of handwritten digits: most people effortlessly recognize those digits as 504192. That ease is deceptive. In each hemisphere of our brain, humans have a primary visual cortex, also known as V1, containing 140 million neurons, with tens of billions of connections between them. And yet human vision involves not just V1, but an entire series of visual cortices (V2, V3, V4, and V5) doing progressively more complex image processing.

We carry in our heads a supercomputer, tuned by evolution over hundreds of millions of years, and superbly adapted to understand the visual world. The difficulty of visual pattern recognition becomes apparent if you attempt to write a computer program to recognize digits like those above. Neural networks approach the problem in a different way: take a large number of handwritten digits, known as training examples, and then develop a system which can learn from those training examples. Perceptrons. What is a neural network?
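As an illustrative aside (not from the book excerpt above): a perceptron weighs its inputs, compares the sum against a threshold, and can be trained with the classic perceptron learning rule. A minimal C sketch learning the AND function; the learning rate and epoch count are arbitrary choices:

```c
#include <stdio.h>

/* Train a single perceptron on the AND function using the
   perceptron learning rule: w += lr * (target - output) * input. */
int main(void)
{
    double w[2] = {0.0, 0.0}, bias = 0.0, lr = 0.1;
    int x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    int target[4] = {0, 0, 0, 1};   /* AND truth table */

    for (int epoch = 0; epoch < 20; epoch++) {
        for (int i = 0; i < 4; i++) {
            double sum = w[0]*x[i][0] + w[1]*x[i][1] + bias;
            int out = (sum > 0.0) ? 1 : 0;   /* step activation */
            double err = target[i] - out;
            w[0] += lr * err * x[i][0];
            w[1] += lr * err * x[i][1];
            bias += lr * err;
        }
    }
    for (int i = 0; i < 4; i++) {
        double sum = w[0]*x[i][0] + w[1]*x[i][1] + bias;
        printf("%d AND %d -> %d\n", x[i][0], x[i][1], sum > 0.0 ? 1 : 0);
    }
    return 0;
}
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on correct weights; for this data it does so within a handful of epochs.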

But since Dr. Le did most of them on the board and did not provide any accompanying slides, I decided to put the contents of the lectures along with the videos here. In this post I have included Dr. Le's lecture videos and added content links with short descriptions to help you navigate them better. Lecture 1: Neural Networks Review. Lecture 2: NNs in Practice; if you have already covered neural networks in the past, the first lecture may have been a bit dry for you, but the real fun begins in this one. Lecture 3: Deep NN Architectures. NYU Course on Deep Learning (Spring 2014). Deep Learning of Representations. Deep learning.

Branch of machine learning. Deep learning (also known as deep structured learning or differentiable programming) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.[1][2][3] Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to, and in some cases surpassing, human expert performance.[4][5][6] Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems.

Jitter. Jitter can be quantified in the same terms as all time-varying signals, e.g., root mean square (RMS) or peak-to-peak displacement. Also like other time-varying signals, jitter can be expressed in terms of spectral density (frequency content). Jitter period is the interval between two times of maximum effect (or minimum effect) of a signal characteristic that varies regularly with time; jitter frequency, the more commonly quoted figure, is its inverse. ITU-T G.810 classifies jitter frequencies below 10 Hz as wander and frequencies at or above 10 Hz as jitter.[2] Jitter may be caused by electromagnetic interference (EMI) and crosstalk with carriers of other signals.
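As an illustrative aside (not part of the excerpt): one widely used quantification of packet jitter is the running interarrival-jitter estimate from RFC 3550 (RTP), J = J + (|D| − J)/16, where D is the change in transit time between consecutive packets. A minimal C sketch, with hypothetical timestamp arrays:

```c
#include <stdio.h>
#include <math.h>

/* RFC 3550 interarrival jitter: a running average of the transit-time
   variation between consecutive packets, smoothed with gain 1/16.
   send[] and recv[] are packet timestamps in the same clock units
   (the example data below is hypothetical). */
static double interarrival_jitter(const double *send, const double *recv, int n)
{
    double j = 0.0;
    for (int i = 1; i < n; i++) {
        /* D = (recv_i - recv_{i-1}) - (send_i - send_{i-1}) */
        double d = (recv[i] - recv[i-1]) - (send[i] - send[i-1]);
        j += (fabs(d) - j) / 16.0;
    }
    return j;
}

int main(void)
{
    double send[] = {0.0, 20.0, 40.0, 60.0, 80.0};  /* ms */
    double recv[] = {5.0, 26.0, 44.0, 67.0, 84.0};  /* ms */
    printf("jitter estimate: %.3f ms\n",
           interarrival_jitter(send, recv, 5));
    return 0;
}
```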

Jitter can cause a display monitor to flicker, affect the performance of processors in personal computers, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices. Types include sampling jitter, packet jitter in computer networks, and compact disc seek jitter.

MMSIM 7 error connected with license file. DSP - Community Edition - Not for commercial use. MicroModeler DSP is a fast and efficient way to design digital filters. Use it to filter signals in the frequency domain for your embedded system. Features include:
IIR filter types: Butterworth, Chebyshev, Inverse Chebyshev, Bessel, Elliptic; high pass, low pass, band pass, band stop.
FIR filter design methods: frequency sampling, equiripple (Parks-McClellan).
FIR filter specifications: high pass, low pass, band pass, band stop, Hilbert transform, differentiator.
Standard filter types: comb filter, moving average filter, Lth-band and half-band, raised cosine, blank filter (add poles and zeros).
Pole types: real pole, conjugate poles, all-pass poles.
Zero types: real zeros, conjugate zeros, reciprocal zero quads, unit zeros.
Multirate filters: polyphase FIR interpolator design, polyphase FIR decimator design.

Stockholm Junior Water Prize | Stockholm International Water Institute. 2014 Stockholm Junior Water Prize. The 2014 Stockholm Junior Water Prize International Final takes place during the World Water Week in Stockholm, August 31 - September 5, 2014. The award ceremony will take place September 3 at Grand Hôtel in the heart of Stockholm. Good luck to all finalists! New prize sums for 2014: young people are our future, and SIWI is enhancing its support and encouragement for young people showing great achievements in water and the environment by notably increasing the prize sums for the Stockholm Junior Water Prize. As of 2014, the new prize sums are: Stockholm Junior Water Prize winner, USD 15,000; winner's school, USD 5,000 (new category); Diploma of Excellence, USD 3,000. Hot topics for the 2014 Stockholm Junior Water Prize: drought and flooding; groundwater issues; salinization; pharmaceutical problems related to water; sanitation and hygiene. Bringing Together the World's Brightest Young Scientists. What does it mean to participate in the competition?

Digital Filter Design, Writing Difference Equations For Digital Filters, a Tutorial. ApICS LLC. Brian T. Boulter, © ApICS ® LLC 2000. Difference equations are presented for 1st, 2nd, 3rd, and 4th order low pass and high pass filters, and 2nd, 4th and 6th order band-pass, band-stop and notch filters, along with a resonance compensation (RES_COMP) filter. The low pass, high pass, band-pass and band-stop difference equations are obtained from the normalized Butterworth continuous-time filter descriptions given below.
1st order normalized Butterworth low pass filter: H(s) = 1/(s + 1) (1.0a)
2nd order normalized Butterworth low pass filter: H(s) = 1/(s² + 1.4142s + 1) (1.0b)
3rd order normalized Butterworth low pass filter: H(s) = 1/((s + 1)(s² + s + 1)) (1.0c)
4th order normalized Butterworth low pass filter: H(s) = 1/((s² + 0.7654s + 1)(s² + 1.8478s + 1)) (1.0d)
2nd order normalized notch filter (1.1a). 4th order normalized notch filter (1.1b). 6th order normalized notch filter (1.1c). Normalized Res_Comp filter (1.1d).
To map (1.0x) to a low-pass filter, substitute s → s/ωc, where ωc is the desired low-pass 3 dB cut-off frequency (1.2). To map (1.0x) to a high-pass filter, substitute s → ωc/s.
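As an illustrative aside (not the tutorial's own listing): discretizing the 1st-order low-pass prototype (1.0a) with the bilinear transform s → (2/T)(1 − z⁻¹)/(1 + z⁻¹) yields a difference equation of the form y[n] = a·y[n−1] + b·(x[n] + x[n−1]). A minimal C sketch; the cutoff and sample rate are arbitrary example values:

```c
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* First-order Butterworth low-pass, H(s) = wc/(s + wc), discretized with
   the bilinear transform. With c = wc*T (T = sample period):
     y[n] = ((2 - c)*y[n-1] + c*(x[n] + x[n-1])) / (2 + c) */
int main(void)
{
    const double fs = 1000.0;               /* sample rate, Hz (example) */
    const double fc = 50.0;                 /* 3 dB cutoff, Hz (example) */
    const double c  = 2.0 * M_PI * fc / fs; /* wc * T */
    const double a  = (2.0 - c) / (2.0 + c);
    const double b  = c / (2.0 + c);

    double x_prev = 0.0, y_prev = 0.0;
    for (int n = 0; n < 20; n++) {
        double x = 1.0;                     /* unit step input */
        double y = a * y_prev + b * (x + x_prev);
        printf("y[%2d] = %f\n", n, y);
        x_prev = x;
        y_prev = y;
    }
    return 0;
}
```

For simplicity the sketch omits frequency prewarping, so the realized cutoff deviates slightly from fc as the cutoff approaches the Nyquist frequency.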

Finite impulse response. The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N + 1 samples (from first nonzero element through last nonzero element) before it then settles to zero. FIR filters can be discrete-time or continuous-time, and digital or analog. Definition. A direct form discrete-time FIR filter of order N: the top part is an N-stage delay line with N + 1 taps, and each unit delay is a z⁻¹ operator in Z-transform notation. A lattice form discrete-time FIR filter of order N. For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the most recent input values:
y[n] = b₀·x[n] + b₁·x[n−1] + ... + b_N·x[n−N] = Σ_{i=0}^{N} b_i·x[n−i]
where x[n] is the input signal, y[n] is the output signal, N is the filter order (an Nth-order filter has N + 1 terms on the right-hand side), and b_i is the value of the impulse response at the ith instant, for 0 ≤ i ≤ N, of an Nth-order FIR filter.

This computation is also known as discrete convolution. Properties: an FIR filter requires no feedback. Infinite impulse response. Bending the Light with a Tiny Chip.
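As an illustrative aside (not part of the excerpt): the weighted-sum definition above translates directly into code. A minimal C sketch of a direct-form FIR filter; the coefficients are an arbitrary 4-tap moving-average kernel, not from the source:

```c
#include <stdio.h>

/* Direct-form FIR: y[n] = sum_{i=0..N} b[i] * x[n-i], with x[k] = 0 for k < 0.
   Order N = 3 here (4 taps); b[] is a simple moving-average kernel. */
#define ORDER 3

int main(void)
{
    const double b[ORDER + 1] = {0.25, 0.25, 0.25, 0.25};
    const double x[] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
    const int len = sizeof x / sizeof x[0];

    for (int n = 0; n < len; n++) {
        double y = 0.0;
        for (int i = 0; i <= ORDER && i <= n; i++)
            y += b[i] * x[n - i];           /* weighted sum of recent inputs */
        printf("y[%d] = %f\n", n, y);
    }
    return 0;
}
```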