Neural networks

The Shogun API cookbook — Shogun-cookbook 5.0 documentation.

GitHub - kpzhang93/MTCNN_face_detection_alignment: Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Neural Networks.

Automatic transliteration with LSTM · YerevaNN. By Tigran Galstyan, Hrayr Harutyunyan and Hrant Khachatrian. Many languages have their own non-Latin alphabets, but the web is full of content in those languages written in Latin letters, which makes it inaccessible to various NLP tools (e.g. automatic translation). Transliteration is the process of converting romanized text back to the original writing system. In theory every language has a strict set of romanization rules, but in practice people do not follow them, and most romanized content is hard to transliterate with rule-based algorithms. We believe this problem is solvable with state-of-the-art NLP tools, and we demonstrate a high-quality solution for Armenian based on recurrent neural networks. We invite everyone to adapt our system to more languages. Contents: problem description (since the early 1990s computers have become widespread in many countries, but operating systems did not fully support different alphabets out of the box), data processing, source of the data.
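Their solution is based on recurrent networks over characters. Below is a minimal sketch, in Keras, of a character-level bidirectional LSTM that labels each romanized character with a character of the target script. The vocabularies, layer sizes, padded length, and the one-input-character-to-one-output-character framing are simplifying assumptions for illustration (real romanizations map groups of Latin letters such as "ch" to a single Armenian letter), not the configuration published by YerevaNN.

```python
from tensorflow.keras import layers, models

# Placeholder character inventories; real vocabularies would be built from the data.
LATIN_CHARS = list("abcdefghijklmnopqrstuvwxyz '")
TARGET_CHARS = list("աբգդեզէըթժ")  # tiny illustrative subset of the Armenian alphabet
MAXLEN = 64  # padded sequence length (assumption)

def build_transliterator(n_in, n_out, maxlen=MAXLEN):
    """Bidirectional LSTM that predicts one original-script character per romanized character."""
    inp = layers.Input(shape=(maxlen,))
    x = layers.Embedding(input_dim=n_in + 1, output_dim=64, mask_zero=True)(inp)  # id 0 = padding
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    out = layers.TimeDistributed(layers.Dense(n_out + 1, activation="softmax"))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_transliterator(len(LATIN_CHARS), len(TARGET_CHARS))
# model.fit(X, y): X is an integer array of shape (n_samples, MAXLEN) with romanized
# character ids; y has shape (n_samples, MAXLEN, 1) with the aligned target character ids.
```

Training data would be pairs of romanized and original-script sentences aligned character by character, which in practice requires handling the one-to-many correspondences this sketch ignores.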

The Neural Network Zoo - The Asimov Institute. With new neural network architectures popping up every now and then, it's hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks; some are completely different beasts. Though all of these architectures are presented as novel and unique, when I drew the node structures their underlying relations started to make more sense. One problem with drawing them as node maps is that it doesn't really show how they're used. It should be noted that while most of the abbreviations used are generally accepted, not all of them are. Composing a complete list is practically impossible, as new architectures are invented all the time. For each of the architectures depicted in the picture I wrote a very, very brief description, citing the original papers (Rosenblatt, Frank; Broomhead, David S., and David Lowe; Hopfield, John J.; and others).
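The earliest architecture the zoo cites is Rosenblatt's perceptron. As a reminder of how small that starting point is, here is a toy NumPy implementation of the classic perceptron learning rule; the AND-gate data, learning rate, and epoch count are arbitrary choices for illustration.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt's learning rule: nudge weights and bias whenever a sample is misclassified.
    X has shape (n_samples, n_features); y contains labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)   # zero when the prediction is correct
            w += update * xi
            b += update
    return w, b

# Toy linearly separable data (an AND gate) just to show the rule converging.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]
```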

Colorful Image Colorization. How to interpret the results: computer vision algorithms often work well on some images but fail on others, and ours is like this too. We believe our work is a significant step forward in solving the colorization problem; however, there are still many hard cases, and this is by no means a solved problem. Some failure cases can be seen below and in the figure here. This is partly because our algorithm is trained on one million images from the ImageNet dataset, and will thus work well for these types of images, but not necessarily for others.
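The approach behind this demo predicts color from lightness: the photo is converted to Lab space, the network sees only the L channel, and its output supplies the a and b channels, which is why the ImageNet training distribution matters so much for which images it handles well. The sketch below shows that data flow with scikit-image; `predict_ab` is a hypothetical stand-in for a trained model (not part of the authors' released code), and the 224-pixel input size is an assumption.

```python
import numpy as np
from skimage import color, io, transform

def colorize(image_path, predict_ab, size=224):
    """Sketch of the L -> ab inference flow used by learning-based colorizers:
    the model only sees the lightness channel and fills in the two color channels."""
    img = io.imread(image_path)
    if img.ndim == 2:                 # true grayscale file
        img = color.gray2rgb(img)
    img = img[:, :, :3]               # drop an alpha channel if present
    lab = color.rgb2lab(transform.resize(img, (size, size)))
    L = lab[:, :, 0]                  # lightness channel, roughly in [0, 100]
    # Hypothetical model call: takes (1, H, W, 1) lightness, returns (1, H, W, 2) ab values.
    ab = predict_ab(L[None, :, :, None])
    lab_pred = np.concatenate([L[:, :, None], ab[0]], axis=2)
    return np.clip(color.lab2rgb(lab_pred), 0, 1)
```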

GitHub - Smorodov/Paddle: PArallel Distributed Deep LEarning.

Common

Convolutional networks.