Bloom filter
Bloom proposed the technique for applications where the amount of source data would require an impracticably large hash area in memory if "conventional" error-free hashing techniques were applied. He gave the example of a hyphenation algorithm for a dictionary of 500,000 words, out of which 90% follow simple hyphenation rules, but the remaining 10% require expensive disk accesses to retrieve specific hyphenation patterns. With sufficient core memory, an error-free hash could be used to eliminate all unnecessary disk accesses; on the other hand, with limited core memory, Bloom's technique uses a smaller hash area but still eliminates most unnecessary accesses. For example, a hash area only 15% of the size needed by an ideal error-free hash still eliminates 85% of the disk accesses (Bloom (1970)). More generally, fewer than 10 bits per element are required for a 1% false positive probability, independent of the size or number of elements in the set (Bonomi et al. (2006)).
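The space/accuracy trade-off is easy to see in code. Below is a minimal Python sketch of a Bloom filter (the class name, the use of SHA-256, and the way the k indices are sliced out of a single digest are implementation conveniences, not part of Bloom's 1970 construction): m bits and k hash functions, where roughly 10 bits per element with k around 7 gives about the 1% false positive rate quoted above.

    import hashlib

    class BloomFilter:
        def __init__(self, m, k):
            self.m, self.k = m, k          # m bits, k hash functions (k <= 8 here)
            self.bits = bytearray((m + 7) // 8)

        def _positions(self, item):
            # Slice k 4-byte indices out of one SHA-256 digest.
            digest = hashlib.sha256(item.encode()).digest()
            for i in range(self.k):
                chunk = digest[4 * i:4 * i + 4]
                yield int.from_bytes(chunk, "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            # False means definitely absent; True means possibly present.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))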
Ctrie
Not to be confused with C-trie.

Operation

The Ctrie data structure is a non-blocking concurrent hash array mapped trie based on single-word compare-and-swap instructions in a shared-memory system. It supports concurrent lookup, insert and remove operations. Keys are inserted by doing an atomic compare-and-swap operation on the node which needs to be modified. The Ctrie is defined by the pointer to the root indirection node (or a root I-node). A C-node is a branching node: it keeps a bitmap bmp of which of its up to 32 branches are present, and the branch selected by a key's hash code at a given level is found with:

    bit = bmp & (1 << ((hashcode >> level) & 0x1F))
    pos = bitcount((bit - 1) & bmp)

Note that the operations treat only the I-nodes as mutable nodes; all other nodes are never changed after being created and added to the Ctrie. Below is the pseudocode of the insert operation, which restarts from the root whenever a compare-and-swap fails:

    def insert(k, v)
        r = READ(root)
        if iinsert(r, k, v, 0, null) = RESTART
            insert(k, v)
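To make the bitmap arithmetic concrete, here is a small Python sketch of the same two formulas (the function name and the None return are illustrative choices, not taken from any particular Ctrie implementation). It computes which slot of a C-node's compressed child array a key's hash code selects at a given level, assuming the 32-way (5 bits per level) fan-out used above:

    def branch_position(bmp, hashcode, level):
        # Take 5 bits of the hash for this level: an index in 0..31.
        index = (hashcode >> level) & 0x1F
        bit = 1 << index
        if bmp & bit == 0:
            return None  # no branch for this index: the key is absent here
        # The C-node stores only the present branches, so the slot is
        # the number of set bits in bmp below our bit.
        return bin(bmp & (bit - 1)).count("1")

    # Example: bitmap 0b1011 has branches at indices 0, 1 and 3; a hash
    # that selects index 3 lands in slot 2 of the compressed array.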
Why Bloom filters work the way they do
Imagine you’re a programmer who is developing a new web browser. There are many malicious sites on the web, and you want your browser to warn users when they attempt to access dangerous sites. For example, suppose the user attempts to access some domain; you’d like a way of checking whether that domain is known to be a malicious site. An obvious naive way is for your browser to maintain a list or set data structure containing all known malicious domains. In this post I’ll describe a data structure, the Bloom filter, which provides an excellent way of solving this kind of problem: it answers approximate membership queries ("is this object a member of the set?") while storing only small hashes of the objects, instead of the full objects. Most explanations of Bloom filters cut to the chase, quickly explaining the detailed mechanics of how Bloom filters work. In this post I take an unusual approach to explaining Bloom filters. Of course, this means that if your goal is just to understand the mechanics of Bloom filters, then this post isn’t for you. A stylistic note: most of my posts are code-oriented, but this one is more conceptual.
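To connect this back to the Bloom filter sketched earlier: hypothetically, the browser could keep a filter of known malicious domains and consult the slow, full list only when the filter answers "possibly present". The domain names below are placeholders, not a real blocklist:

    # Reuses the BloomFilter class sketched above.
    malicious = BloomFilter(m=10_000, k=7)  # ~10 bits per entry for ~1,000 domains
    for domain in ("evil.example", "phish.example"):
        malicious.add(domain)

    if malicious.might_contain("example.org"):
        pass  # possibly malicious: fall back to the full, slower check
    else:
        pass  # definitely not in the set: let the request through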
Detecting similar and identical images using perceptual hashes - Hacker Labs
A couple of my hobbies are travelling and photography. I love to take pictures and experiment with photography. Usually after my trips, I just copy the photos to either my iPad or a couple of my external hard disks. After 10 years, I have over 200K photos distributed across several disks and machines. I had to find a way to organize these photos and create a workflow for future maintenance. First, I needed to find out what exactly counts as a duplicate image:

Identical images: There were multiple copies of the same photo in different directories with different names.
Similar images: I usually bracket (exposure compensate or flash compensate) important pictures.

Identical photos (1) are easy to find. Finding similar photos (2) is a little more challenging; they can differ in several ways:

Exposure or flash compensated, so photos were slightly lighter/darker than the original.
A small difference in the point of focus or a slight change in point of view.
Photos with changed aspect ratio.
Resized photos, etc.

Image similarity algorithms
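One simple way to make such comparisons robust is a perceptual hash. As a rough illustration (a generic average-hash sketch using the Pillow imaging library; not necessarily the algorithm this post settles on), shrink the image, grayscale it, and record which pixels are brighter than the mean; near-duplicates then differ in only a few bits:

    from PIL import Image  # assumes the Pillow library is installed

    def average_hash(path, hash_size=8):
        # Shrinking and grayscaling discard the detail that exposure
        # tweaks, resizing or small reframings would otherwise change.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the average or not.
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    def hamming_distance(h1, h2):
        # Number of differing bits; a small distance suggests similar images.
        return bin(h1 ^ h2).count("1")

Identical files hash identically, while slightly lighter/darker or resized copies typically land within a handful of bits of each other.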
Operational transformation
Operational transformation (OT) is a technology for supporting a range of collaboration functionalities in advanced collaborative software systems. OT was originally invented for consistency maintenance and concurrency control in collaborative editing of plain text documents. Two decades of research has extended its capabilities and expanded its applications to include group undo, locking, conflict resolution, operation notification and compression, group-awareness, HTML/XML and tree-structured document editing, collaborative office productivity tools, application-sharing, and collaborative computer-aided media design tools (see OTFAQ).

History

Operational transformation was pioneered by C. Ellis and S. J. Gibbs in the GROVE system in 1989.

System architecture

Collaborative systems using OT typically adopt a replicated architecture for the storage of shared documents to ensure good responsiveness in high-latency environments, such as the Internet: each site edits its own replica, and concurrent operations are transformed against one another before being applied, with consistency models such as CC and CCI specifying the correctness requirements.
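The heart of OT is the transformation function itself. As a minimal sketch (character-wise inserts only; the insert/delete cases are deliberately left out), the classic insert-insert rule shifts the later insert's position:

    def transform_insert_insert(op1, op2):
        # Each op is (position, character). Rewrite op1 so that it can be
        # applied after a concurrent op2 on the same document replica.
        p1, c1 = op1
        p2, _ = op2
        if p1 < p2:
            return (p1, c1)      # op2 inserted to our right: no shift needed
        return (p1 + 1, c1)      # op2 inserted at or before us: shift right

    # Note: when p1 == p2 this naive rule shifts both operations and the
    # replicas diverge; real OT systems break that tie with site priorities.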
pHash.org: Home of pHash, the open source perceptual hash library
Genetic Programming: Evolution of Mona Lisa | Roger Alsing Weblog
This weekend I decided to play around a bit with genetic programming and put evolution to the test, the test of fine art :-) I created a small program that keeps a string of DNA for polygon rendering:

0) Set up a random DNA string (application start)
1) Copy the current DNA sequence and mutate it slightly
2) Use the new DNA to render polygons onto a canvas
3) Compare the canvas to the source image
4) If the new painting looks more like the source image than the previous painting did, then overwrite the current DNA with the new DNA
5) Repeat from 1 (a sketch of this loop appears below)

Now to the interesting part :-) Could you paint a replica of the Mona Lisa using only 50 semi-transparent polygons? That is the challenge I decided to put my application up to. So what do you think?
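Here is a minimal Python sketch of that loop (using the Pillow library; the target file name is a placeholder, and the mutation crudely replaces a whole polygon rather than nudging one slightly as the post describes):

    import random
    from PIL import Image, ImageDraw  # assumes Pillow is installed

    SIZE = (200, 200)
    N_POLYGONS = 50  # the post's budget: 50 semi-transparent polygons

    def random_polygon():
        points = [(random.randrange(SIZE[0]), random.randrange(SIZE[1]))
                  for _ in range(3)]
        rgba = tuple(random.randrange(256) for _ in range(4))  # alpha => semi-transparent
        return (points, rgba)

    def render(dna):
        img = Image.new("RGB", SIZE, "black")
        draw = ImageDraw.Draw(img, "RGBA")  # blend fills using their alpha channel
        for points, rgba in dna:
            draw.polygon(points, fill=rgba)
        return img

    def fitness(img, target):
        # Sum of squared per-channel differences; lower means more alike.
        return sum((a - b) ** 2
                   for pa, pb in zip(img.getdata(), target.getdata())
                   for a, b in zip(pa, pb))

    target = Image.open("mona_lisa.png").convert("RGB").resize(SIZE)  # placeholder path
    dna = [random_polygon() for _ in range(N_POLYGONS)]          # step 0
    best = fitness(render(dna), target)
    for _ in range(10_000):
        child = list(dna)                                        # step 1: copy...
        child[random.randrange(N_POLYGONS)] = random_polygon()   # ...and mutate
        score = fitness(render(child), target)                   # steps 2-3
        if score < best:                                         # step 4: keep improvements
            dna, best = child, score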
Getting Cirrius: Calculating Similarity (Part 2): Jaccard, Sørensen and Jaro-Winkler Similarity
In my last post on calculating string similarity, I focused on one algorithm (Cosine Similarity) and some of the concepts involved in making its calculation. In this post, I want to introduce three more ways of calculating similarity. The three methods I present today are all a little less intensive to calculate than cosine similarity, and one in particular is probably more accurate. The three algorithms are:

Jaccard Similarity
Sørensen Similarity
Jaro-Winkler Similarity

I'm going to present the implementation of each of these algorithms, and then at the end of the post, the results they return for a couple of example sets of strings. Let's begin in order, starting with Jaccard Similarity. Jaccard Similarity is very easy to calculate and actually required no extra programming on my part to implement (all the utility functions required were created for the Cosine Similarity implementation).
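The implementation itself didn't survive in this excerpt, so here is a generic Python sketch of the Jaccard coefficient over word tokens (not the author's original code, which reused his Cosine Similarity utility functions):

    def jaccard_similarity(s1, s2):
        # Jaccard coefficient: |A intersect B| / |A union B| over token sets.
        a = set(s1.lower().split())
        b = set(s2.lower().split())
        if not a and not b:
            return 1.0  # convention: two empty strings are identical
        return len(a & b) / len(a | b)

    # jaccard_similarity("the quick brown fox", "the quick brown dog") == 0.6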