
Visualizing Algorithms

The power of the unaided mind is highly overrated… The real powers come from devising external aids that enhance cognitive abilities. —Donald Norman

Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. But algorithms are also a reminder that visualization is more than a tool for finding patterns in data.

Sampling

Before I can explain the first algorithm, I need to explain the problem it addresses. Light — electromagnetic radiation — the light emanating from this screen, traveling through the air, focused by your lens and projected onto the retina — is a continuous signal. To be perceived, that continuous signal must be reduced to discrete impulses by the photoreceptor cells of the retina. This reduction process is called sampling, and it is essential to vision. Sampling is made difficult by competing goals: samples should be evenly distributed so there are no gaps, yet we must avoid repeating, regular patterns, which cause aliasing. Here’s how it works:
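The excerpt breaks off here. As one concrete illustration of those competing goals, below is a minimal sketch of Mitchell’s best-candidate algorithm, a sampling strategy of the kind the full article visualizes — written in Python rather than the article’s JavaScript, with the unit-square domain and the candidate count `k` as assumptions of this sketch, not details from the text.

```python
import math
import random

def best_candidate(samples, k=10):
    """Mitchell's best-candidate: generate k uniform random candidates
    in the unit square and keep the one farthest from every existing
    sample. Larger k gives more even spacing at higher cost -- the
    trade-off between coverage and effort mentioned above."""
    best, best_dist = None, -1.0
    for _ in range(k):
        c = (random.random(), random.random())
        d = min(math.dist(c, s) for s in samples)
        if d > best_dist:
            best, best_dist = c, d
    return best

samples = [(random.random(), random.random())]  # seed with one sample
for _ in range(63):
    samples.append(best_candidate(samples))
# samples now holds 64 points, more evenly spaced than uniform random
```

Because each candidate is still random, the result avoids the regular grid patterns that cause aliasing, while the farthest-candidate rule keeps gaps small.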

https://bost.ocks.org/mike/algorithms/

Related: Modeling · Important to review · Math

A new digital ecology is evolving, and humans are being left behind

This is an excellent point. You mean something similar to Valve’s fee on Steam’s marketplace? They take a 10% cut out of every transaction, no matter how big or small.

6174 (number)

6174 is known as Kaprekar’s constant[1][2][3] after the Indian mathematician D. R. Kaprekar.

Quantum Machine Learning Singularity from Google, Kurzweil and Dwave?

Dwave’s 512-qubit system can speed up the solution of Google’s machine learning algorithms by 50,000 times in 25% of the problem cases. This could make it the fastest system for solving Google’s problems. Google and Dwave have been working on sparse coding, deep learning, and unsupervised machine learning, with Dwave’s quantum computer helping to get better and faster results in some cases. Google research discusses the use of quantum computers for AI and machine learning.
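The routine behind Kaprekar’s constant, mentioned above, is simple enough to sketch: arrange a four-digit number’s digits in descending and then ascending order, subtract the smaller from the larger, and repeat; every four-digit number with at least two distinct digits reaches 6174 within seven iterations. A minimal Python sketch:

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine on a four-digit number.
    Leading zeros are preserved, e.g. 998 is treated as 0998."""
    digits = f"{n:04d}"
    hi = int("".join(sorted(digits, reverse=True)))  # descending
    lo = int("".join(sorted(digits)))                # ascending
    return hi - lo

n, steps = 3524, 0
while n != 6174:
    n = kaprekar_step(n)
    steps += 1
# 3524 -> 3087 -> 8352 -> 6174, so steps == 3
```

6174 is a fixed point of the routine (7641 − 1467 = 6174), which is why the iteration halts there.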

Why Mathematicians Can’t Find the Hay in a Haystack

The first time I heard a mathematician use the phrase, I was sure he’d misspoken. We were on the phone, talking about the search for shapes with certain properties, and he said, “It’s like looking for hay in a haystack.” “Don’t you mean a needle?” I almost interjected. Then he said it again. In mathematics, it turns out, conventional modes of thought sometimes get turned on their head.

Gestalten

Looking through an atlas has always been a fascinating way to explore the world. Around the World is a contemporary evolution of an atlas tailored to our information age that takes its readers around the globe in 272 pages. In this book, compelling information graphics illustrate how natural and man-made phenomena impact our lives. They take us on an entertaining and informative journey that gives us a deeper, more intuitive understanding not only of our planet’s geography, but also of the key personal and global developments of the twenty-first century. Alongside classic facts about nature, history, population, culture, and politics, Around the World’s eye-catching information graphics clearly explain complex processes such as global trade and evolving demographics. The book gives added insight into our modern world through its visual exploration of topics such as the changing speed of travel.

How I Rewired My Brain to Become Fluent in Math - Nautilus

I was a wayward kid who grew up on the literary side of life, treating math and science as if they were pustules from the plague. So it’s a little strange how I’ve ended up now—someone who dances daily with triple integrals, Fourier transforms, and that crown jewel of mathematics, Euler’s equation. It’s hard to believe I’ve flipped from a virtually congenital math-phobe to a professor of engineering. One day, one of my students asked me how I did it—how I changed my brain. I wanted to answer Hell—with lots of difficulty! After all, I’d flunked my way through elementary, middle, and high school math and science.

An Introduction to Deep Learning (in Java): From Perceptrons to Deep Networks

In recent years, there’s been a resurgence in the field of Artificial Intelligence. It’s spread beyond the academic world, with major players like Google, Microsoft, and Facebook creating their own research teams and making some impressive acquisitions. Some of this can be attributed to the abundance of raw data generated by social network users, much of which needs to be analyzed, to the rise of advanced data science solutions, and to the cheap computational power available via GPGPUs. But beyond these phenomena, this resurgence has been powered in no small part by a new trend in AI, specifically in machine learning, known as “Deep Learning”. In this tutorial, I’ll introduce you to the key concepts and algorithms behind deep learning, beginning with the simplest unit of composition and building to the concepts of machine learning in Java.

A Thirty Second Tutorial on Machine Learning
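The linked tutorial builds upward from the single perceptron, deep learning’s simplest unit of composition. As a taste of that starting point, here is a minimal perceptron trained on the AND function — a Python sketch rather than the tutorial’s Java, with the learning rate and epoch count chosen arbitrarily for illustration.

```python
def predict(w, b, x):
    """Perceptron output: 1 if the weighted sum crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(data, lr=0.1, epochs=50):
    """Classic perceptron learning rule: nudge the weights toward each
    misclassified example until the data is separated."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so a single perceptron can learn it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

A single perceptron can only draw one straight decision boundary — it cannot learn XOR, for example — which is exactly why deeper networks of such units became interesting.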

Future - The mysterious origins of an uncrackable video game

“You and your team of archaeologists have fallen into the ‘catacombs of the zombies’.” A miserable situation, to be sure. But this was the chilling trial that faced players of Entombed, an Atari 2600 game, according to the instruction manual. The catacombs were an unforgiving place: a downward-scrolling, two-dimensional maze that players had to navigate expertly in order to evade the “clammy, deadly grip” of their zombie foes.
