
Memcached: a distributed memory object caching system


Networking HP PPA DeskJet Printers using SAMBA: Emulating a PostScript printer on a Windows Host.

4. Emulating a PostScript printer on a Windows Host. If you do not have commercial PostScript emulation software for Windows that will work with your HP PPA DeskJet (the author is unaware of any such software that supports PPA printers), you can use Ghostscript together with HP's native Windows drivers.

4.1 Installing Ghostscript as the emulation software. From the Ghostscript home page, download and install (in this order) the Windows packages of Ghostscript (the PostScript emulation software).

4.2 Adding the fictitious PostScript printer. In the following, I assume your printer is an HP DeskJet 722C and is installed with its native Windows drivers as a printer called "HP DeskJet 720C Series". The following instructions were tested on Windows 98 and may differ on other Windows variants. Open the Settings/Printers folder. First check that the HP PPA DeskJet is correctly installed, using HP's native Windows drivers. Next select the Details tab.

BizTalk Caching Pattern UPDATED: 24th August 2007, to reflect Richard Seroter's comment. There is no need to explain the importance of caching in any server-based development such as BizTalk, ASP.NET, etc. You need to plan early in your development cycle to cache resources in order to make the most of your limited server resources. In BizTalk applications it's quite common to look up a data store to pick up a value at different stages, such as custom Adapters, Pipelines, Orchestrations, etc. Due to the nature of the BizTalk architecture and the behavior of the .NET runtime, it's very easy to implement caching logic with just a static class and a static method, as shown below. In .NET, static variables are maintained per Common Language Runtime (CLR) "AppDomain". Inside the BizTalk server host instances, several subservices will be running. The BizTalk host instance simply acts as a container to host these other services. Here is an example using an Orchestration: the Message Assignment shape contains the following lines of code. Nandri! (Thank you!)
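The original post's .NET snippets are not reproduced in this excerpt. As a rough, hypothetical illustration of the same idea (a static cache shared by every caller in the same runtime, so only a cache miss reaches the backing store), here is a minimal sketch in Java; the class and method names are invented for the example.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class LookupCache {
        // One map per process; every caller in the same runtime shares it,
        // much like a static variable shared per CLR AppDomain.
        private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

        private LookupCache() {}

        public static String getValue(String key) {
            // Only a cache miss reaches the expensive lookup.
            return CACHE.computeIfAbsent(key, LookupCache::loadFromStore);
        }

        private static String loadFromStore(String key) {
            // Placeholder for the expensive call (database, config store, ...).
            return "value-for-" + key;
        }
    }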

spymemcached. A simple, asynchronous, single-threaded memcached client written in Java. Efficient storage of objects: general serializable objects are stored in their serialized form and optionally compressed if they meet criteria. Note - as of the 2.9 (and 2.10) series, artifacts are published to Maven Central and the groupId has changed from "spy" to "net.spy": Grab the new artifacts from here: Please use at least 2.10.2 from the 2.10 series since it fixes some issues for the new 2.10.0 and 2.10.1 features!

Version 2.10.3 was released on 2 Dec 2013
Version 2.10.2 was released on 5 Nov 2013
Version 2.10.1 was released on 11 Oct 2013
Version 2.10.0 was released on 5 Sept 2013
Version 2.9.1 was released on 4 July 2013
Version 2.9.0 was released on 4 June 2013

Please note that Google Code doesn't allow file uploads starting in 2014. YourKit is kindly supporting the spymemcached open source project with its full-featured Java Profiler.
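As a minimal sketch (not from the project page) of how a spymemcached client is typically used, assuming a memcached server listening on localhost:11211:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class SpyExample {
        public static void main(String[] args) throws Exception {
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));
            // Store a serializable value with a one-hour (3600 second) expiry.
            client.set("greeting", 3600, "hello from spymemcached");
            // get() returns the deserialized object, or null on a cache miss.
            Object cached = client.get("greeting");
            System.out.println(cached);
            client.shutdown();
        }
    }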

JCS - Java Caching System. JCS is a distributed caching system written in Java. It is intended to speed up applications by providing a means to manage cached data of various dynamic natures. Like any caching system, JCS is most useful for high-read, low-put applications. Latency times drop sharply and bottlenecks move away from the database in an effectively cached system. JCS goes beyond simply caching objects in memory. JCS 2.0 works on JDK versions 1.6 and up. JCS is a Composite Cache: the foundation of JCS is the Composite Cache, which is the pluggable controller for a cache region. The JCS jar provides production-ready implementations of each of the four types of caches:

LRU Memory Cache - an extremely fast, highly configurable memory cache.
Indexed Disk Cache - a fast, reliable, and highly configurable swap for cached data.
JDBC Disk Cache - a fast, reliable, and highly configurable disk cache.
TCP Lateral Cache
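As an illustrative sketch (assuming JCS 2.x on the classpath and a cache.ccf defining a "default" region; the region name and keys here are made up), basic use looks roughly like this:

    import org.apache.commons.jcs.JCS;
    import org.apache.commons.jcs.access.CacheAccess;

    public class JcsExample {
        public static void main(String[] args) throws Exception {
            CacheAccess<String, String> cache = JCS.getInstance("default");
            cache.put("city:1", "Lisbon");    // lands in the LRU memory cache first
            String hit = cache.get("city:1"); // null would indicate a miss
            System.out.println(hit);
        }
    }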

MountableHDFS

    [machine1] ~ > df -kh /export/hdfs/
    Filesystem  Size  Used  Avail  Use%  Mounted on
    fuse        4.1P  642T  3.5P   21%   /export/hdfs
    [machine1] ~ > ls /export/hdfs/
    home  tmp  Trash  user  usr  var

These projects (enumerated below) allow HDFS to be mounted (on most flavors of Unix) as a standard file system using the mount command. Once mounted, the user can operate on an instance of HDFS using standard Unix utilities such as 'ls', 'cd', 'cp', 'mkdir', 'find', 'grep', or use standard POSIX libraries like open, write, read, close from C, C++, Python, Ruby, Perl, Java, bash, etc. All, except HDFS NFS Proxy, are based on the Filesystem in Userspace project (FUSE). The WebDAV-based one can be used with other WebDAV tools, but requires FUSE to actually mount. Note that a great thing about FUSE is that you can export a FUSE mount using NFS, so you can use fuse-dfs to mount HDFS on one machine and then export that using NFS.

ESX How To. VMware ESX 2.1/5 Server: Beyond the Manual. Document Version 1.6. By Mike Laverick, © RTFM Education. For errors/corrections please contact: mikelaverick@rtfm-ed.co.uk. Table of Contents: Introduction; Virtual Disk Management; Create Writable Floppy Images; Sample Script to Import a Disk Template (with simple error checking); Convert a WS/GSX VMDK disk from IDE to SCSI; Reading a Virtual Disk from Windows (DiskMount); Offline Backup of VM; Online Backups of VM (Redo File); Compressing Virtual Disks & Disk Templates; Switching from Bus Logic to LSI Logic Controller; Physical Disk Management; Spanning VMFS Partitions; Mount an Existing File System/Partition from the ESX Server; Unattended Installation of ESX; Create an Unattended Installation (Network); Clone ESX Server with Symantec Ghost; Changing the Service Console's IP/SM; Changing the Service Console's Default Gateway; Changing the Service Console's Hostname; Changing the Service Console's DNS; Updating Hosts File. Note:

Algorithms 101 - How to eliminate redundant cache misses in a distributed cache. I'm going to jump right into a fairly complex subject; for background, check out my recent Distributed Cache Webcast here: Online Training. In the Webcast, I wrote a simple caching service that fronts a simple "GetService" method. I define "GetService" as some service that, for a given key, can retrieve a given value. Pretty basic stuff; if you have ever implemented a SQL query, a Web Service client, or something similar, you can probably envision the implementation. The interface looks like this:

    public interface GetService<K, V> {
        public V get(K k);
    }

The Typical Solution. Here's where it gets interesting.

    sub get_foo_object {
        my $foo_id = int(shift);
        my $obj = $::MemCache->get("foo:$foo_id");
        return $obj if $obj;
        $obj = $::db->selectrow_hashref("SELECT ....

The problem with this approach is that it simply ignores the race condition that happens when you have a cache miss. To solve the race, we have to fix a couple of issues... Step 1 - Change to First Writer Wins. Sounds easy, right?
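The post's own fix is not shown in this excerpt, but the "first writer wins" idea can be sketched in Java against the GetService interface above: concurrent callers that miss on the same key share one in-flight computation instead of each hitting the backend. This is a generic sketch, not the author's implementation.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.Future;
    import java.util.concurrent.FutureTask;

    public class MemoizingGetService<K, V> implements GetService<K, V> {
        private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();
        private final GetService<K, V> backend;

        public MemoizingGetService(GetService<K, V> backend) {
            this.backend = backend;
        }

        public V get(K key) {
            Future<V> f = cache.get(key);
            if (f == null) {
                FutureTask<V> task = new FutureTask<>(() -> backend.get(key));
                f = cache.putIfAbsent(key, task); // only the first writer installs a task
                if (f == null) {
                    f = task;
                    task.run();                   // losers block on the winner's Future
                }
            }
            try {
                return f.get();
            } catch (Exception e) {
                cache.remove(key, f);             // allow a retry after a failed load
                throw new RuntimeException(e);
            }
        }
    }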

A Bunch of Great Strategies for Using Memcached and MySQL Better. The first recommendation for speeding up a website is almost always to add cache and more cache. And after that, add a little more cache just in case. Memcached is almost always given as the recommended cache to use. What we don't often hear is how to effectively use a cache in our own products. MySQL hosted two excellent webinars (referenced below) on the subject of how to deploy and use memcached. The star of the show, other than MySQL of course, is Farhan Mashraqi of Fotolog. Fotolog, as they themselves point out, is probably the largest site nobody has ever heard of, pulling in more page views than even Flickr. What is Memcached? The first part of the first webinar gives a good overview of memcached. The rest of the first webinar is Farhan explaining in wonderful detail how they use memcached at Fotolog. Memcached and MySQL Go Better Together. Write-scale the database by sharding.

Intro to Caching, Caching Algorithms and Caching Frameworks, part. A lot of us have heard the word cache, and when you ask people about caching they give you a perfect answer, but they don't know how it is built, or on which criteria they should favor one caching framework over another, and so on. In this article we are going to talk about caching, caching algorithms, and caching frameworks, and which is better than the other.

The Interview: "Caching is a temp location where I store data that I need frequently; as the original data is expensive to fetch, I can retrieve it faster this way." That is what Programmer 1 answered in the interview (one month earlier he had submitted his resume to a company that wanted a Java programmer with strong experience in caching and caching frameworks and extensive data manipulation).
Interviewer: Nice, and based on what criteria do you choose your caching solution?
Programmer 1: Huh (thinking for 5 minutes), mmm, based on, on, on the data (coughing...)
Interviewer: Excuse me!
Programmer 1: Data?!
Programmer 1: Capacity?

What is Cache?
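To make the interview's definition concrete, here is a tiny illustrative sketch (not from the article) of a bounded cache with least-recently-used eviction, one of the caching algorithms articles like this typically compare; it simply piggybacks on LinkedHashMap's access order.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public LruCache(int capacity) {
            super(16, 0.75f, true); // true = order entries by access, not insertion
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity; // evict the least recently used entry
        }
    }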

memcached Basics for Rails. Many speedy sites use memcached to save the results of expensive database queries and intense rendered templates. This is a basic introduction to using memcached with Rails. Thanks to Eric Hodel and Layton Wedgeworth, who have answered many questions. Yes, there is a hash in the sky. Memcached is a lightweight server process that stakes out a fixed amount of memory and makes it available as a quick-access object cache. Some of the things you can do with it are:

Automatically cache a row from the database as an ActiveRecord object
Explicitly render_to_string and save the results (works well for tag clouds)
Manually store a complicated database query and retrieve it later

I thought there was some kind of fancy voodoo happening, but it turns out that it's basically just a hash! Objects are serialized using Marshal, so it's very fast. Installation on Mac OS X: I first installed the memcached server using DarwinPorts, but each query was taking five seconds to answer. Install cached_model. Try it out.

A peek at memcached's implementation. I am a huge fan of memcached and we use it a lot on Plurk. Why like memcached:

it has a very simple protocol and is supported in a lot of languages
it's used by web giants (Facebook now has 25 terabytes of memcached cache)
it performs really well

I have looked lightly into the internals of memcached to find out how it does its magic and what makes it such an amazing choice for caching. External libraries used: memcached uses libevent to provide non-blocking IO, and Bob Jenkins's hash function for hashing. How memcached manages memory: memcached does not use malloc/free for memory management, but a manual memory manager (implemented in slabs.c). The primary goal of the slabs subsystem in memcached was to eliminate memory fragmentation issues totally by using fixed-size memory chunks coming from a few predetermined size classes (early versions of memcached relied on malloc()'s handling of memory, which proved problematic because of fragmentation).
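As a rough illustration of the slab idea (a hypothetical Java sketch, not memcached's actual C code), chunk sizes grow by a fixed factor and each item is rounded up to the smallest class that fits, trading a little wasted space for freedom from fragmentation; the 80-byte minimum and 1.25 growth factor are example parameters only.

    import java.util.ArrayList;
    import java.util.List;

    public class SlabClasses {
        // Build the fixed chunk-size classes, each growthFactor times the last.
        static int[] buildClasses(int minChunk, int maxChunk, double growthFactor) {
            List<Integer> sizes = new ArrayList<>();
            for (double size = minChunk; size <= maxChunk; size *= growthFactor) {
                sizes.add((int) size);
            }
            return sizes.stream().mapToInt(Integer::intValue).toArray();
        }

        // Pick the smallest class that fits the item.
        static int classFor(int itemSize, int[] classes) {
            for (int c : classes) {
                if (itemSize <= c) {
                    return c;
                }
            }
            throw new IllegalArgumentException("item larger than the biggest chunk");
        }

        public static void main(String[] args) {
            int[] classes = buildClasses(80, 1024 * 1024, 1.25);
            System.out.println(classFor(100, classes)); // 100 with these example parameters
        }
    }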

The Role of Caching in Large Scale Architecture. Pre-Internet, lots of systems were built without caches. The need to scale has led to the widespread deployment of caching. Why does caching work? Requests for data are not randomly distributed; if requests for data were entirely random, it would be hard to cache a subset of it. Figure 1: Pareto Distribution. If you are in doubt, take a look at your own systems and create a chart of the frequency of data of a given type. These observations allow us to create hierarchical approaches where we try to match frequency of use to the speed of access of the cache and the capacity of the cache. It is useful to use the example of computer hardware. Data is often written once and read many times; this is known as the read-write ratio. The cache hit ratio is improved by holding more data in the cache, and by holding on to the data for longer periods. Stale data is often acceptable: take Google search as an example. Reasons to Cache: the first reason to cache is performance.
