StumbleUpon: 21 Questions to Ask Yourself Before Submi. (Question mark is a CC image by Marco Belucci.) With the meteoric rise of Twitter and Facebook, the social discovery service StumbleUpon has been almost forgotten, though many people still use it for casual browsing. It is difficult to use SU for business, however, unless you use the StumbleUpon URL shortener; any other business use is essentially prohibited, especially self-promotional stumbling. While you would assume that the private, just-for-fun way of using the StumbleUpon system is at least desired by SU, I was amazed to find out that the just-for-fun stumbles I submitted rarely got popular.

Popular SU submissions still get substantial traffic, in the thousands of visits, which can pay off financially if you have CPM (cost per mille, i.e. paid per 1,000 impressions) ads on your blog or publication. Many stumblers now send their submissions out to all of their followers to spread the word. This can get quite annoying when the same person sends such “shouts” daily and even mediocre content gets sent out.
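To make the CPM math concrete, here is a back-of-the-envelope calculation; the visitor count and rate below are hypothetical numbers chosen purely for illustration:

```python
def cpm_revenue(impressions, cpm_rate):
    """Revenue from CPM ads: the rate is paid per 1,000 impressions."""
    return impressions / 1000 * cpm_rate

# Hypothetical example: a popular stumble sends 5,000 visitors,
# each viewing one page carrying a $2.00 CPM ad unit.
print(cpm_revenue(5000, 2.00))  # -> 10.0 (dollars)
```

Modest per-stumble, but it compounds if submissions go popular regularly.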

Introduction to RDFa. RDFa (“Resource Description Framework in attributes”) is having its five minutes of fame: Google is beginning to process RDFa and Microformats as it indexes websites, using the parsed data to enhance the display of search results with “rich snippets.” Yahoo!, meanwhile, has been processing RDFa for about a year. With these two giants of search on the same trajectory, a new kind of web is closer than ever before.
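As a minimal sketch of what such markup looks like, here is ordinary HTML with RDFa attributes added; the person, title, and date are hypothetical, and the vocabularies (FOAF, Dublin Core) are common choices in RDFa examples rather than anything mandated:

```html
<!-- RDFa layers attributes (typeof, property) onto ordinary HTML,
     so machines can extract structured data from human-readable pages. -->
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:foaf="http://xmlns.com/foaf/0.1/"
     typeof="foaf:Person">
  <span property="foaf:name">Jane Example</span> wrote
  <span property="dc:title">An Example Article</span>,
  published on <span property="dc:date">2009-01-01</span>.
</div>
```

A human sees a normal sentence; a parser sees a person, a title, and a date it can reuse, for example in a rich snippet.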

The web is designed to be consumed by humans, and much of the rich, useful information our websites contain is inaccessible to machines. People can cope with all sorts of variations in layout, spelling, capitalization, color, position, and so on, and still absorb the intended meaning from the page. Machines, on the other hand, need some help. A new kind of web, a semantic web, would be made up of information marked up in such a way that software can also easily understand it. Improved search: adding machine-friendly data to a web page improves our ability to search.

Feature Column from the AMS. As we'll see, the trick is to ask the web itself to rank the importance of pages. Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone.

You may feel sure that one of the documents in the collection contains a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. One way to determine the importance of pages is to use a human-generated ranking, but that clearly cannot keep up with a collection of this size.

The Anatomy of a Search Engine.
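The idea of asking the web to rank itself is PageRank, usually computed by power iteration: a page is important if important pages link to it. A minimal sketch in Python, where the four-page link graph is hypothetical and d = 0.85 is the conventionally cited damping factor:

```python
def pagerank(links, d=0.85, iterations=50):
    """Power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start uniform
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}   # "random surfer" teleport term
        for p, outlinks in links.items():
            if outlinks:
                share = rank[p] / len(outlinks)  # split rank over outlinks
                for q in outlinks:
                    new[q] += d * share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Hypothetical four-page web: everyone links to A, and A links to B.
graph = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # A, the most-linked page, ranks highest
```

The ranks form a probability distribution (they sum to 1), which is what lets them be read as "the chance a random surfer is on this page."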

Sergey Brin and Lawrence Page ({sergey, page}@cs.stanford.edu), Computer Science Department, Stanford University, Stanford, CA 94305. Abstract: In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. (Note: there are two versions of this paper, a longer full version and a shorter printed version.) Web search engines, scaling up, 1994 to 2000: search engine technology has had to scale dramatically to keep up with the growth of the web.
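The calculation at the heart of the paper is the PageRank recurrence. In the form the paper gives, with d the damping factor (typically 0.85), T_1 through T_n the pages linking to page A, and C(T) the number of links going out of page T:

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```

A commonly used variant divides the (1 - d) term by the total number of pages, which makes the ranks sum to 1 and thus a proper probability distribution.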

Creating a search engine which scales even to today's web presents many challenges, and these tasks are becoming increasingly difficult as the Web grows. Design goals: the main goal is to improve the quality of web search engines, in part because, aside from tremendous growth, the Web has also become increasingly commercial over time. The paper then describes PageRank, its method of bringing order to the web, and the details of the PageRank calculation.

Down and Dirty: Write Your Own URL Rewrite. We all know by this time about the benefits of converting your parameterized URLs to human- and crawler-friendly URLs, but the stock tools of the trade (ISAPI_Rewrite, mod_rewrite, etc.) don't necessarily scale all that well when you have a large number of categories, product pages, etc.

I'm going to walk you through what it takes to code this yourself; I think you'll find it's less scary and complex than you thought, and that it gives you a number of benefits in terms of ongoing maintenance, flexibility, etc.

Overview. The core problem: your site uses parameter-happy URLs, but for SEO and user-friendliness you're dreaming of semi-readable URLs instead. You've got a lot of content, mostly coming from the database, and the number of products, categories, subcategories, etc. means the prospect of trying to create (and maintain!) static rewrite rules for each one is daunting. For clarity (and because it's Case Study month at SEOmoz :-) we'll use my honeymoon registry and travel site, www.thebigday.com, for our examples. You want to 301 redirect the old parameterized URLs to their new friendly equivalents.
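A minimal sketch of the idea in Python; the URL patterns, slugs, and category IDs below are hypothetical, and a real implementation would look slugs up in the database rather than a hard-coded dict:

```python
import re

# Hypothetical mapping from friendly slugs to internal category IDs;
# on a real site this lookup would hit the database.
CATEGORY_SLUGS = {"beach-honeymoons": 17, "city-breaks": 42}

def rewrite(path):
    """Map a requested path to what the application should do with it.

    Returns (status, location): 200 plus the internal parameterized URL
    when a friendly URL matches, or 301 plus the friendly URL when an
    old parameterized URL is requested.
    """
    # Friendly URL -> internal handler, e.g. /category/beach-honeymoons/
    m = re.match(r"^/category/([a-z0-9-]+)/?$", path)
    if m and m.group(1) in CATEGORY_SLUGS:
        return 200, "/products.aspx?catid=%d" % CATEGORY_SLUGS[m.group(1)]

    # Old parameterized URL -> 301 to its friendly equivalent, so
    # search engines consolidate rankings onto a single URL.
    m = re.match(r"^/products\.aspx\?catid=(\d+)$", path)
    if m:
        catid = int(m.group(1))
        for slug, cid in CATEGORY_SLUGS.items():
            if cid == catid:
                return 301, "/category/%s/" % slug
    return 404, None

print(rewrite("/category/beach-honeymoons/"))  # (200, '/products.aspx?catid=17')
print(rewrite("/products.aspx?catid=42"))      # (301, '/category/city-breaks/')
```

The 301 (permanent) redirect is the important design choice: it tells crawlers the move is permanent, so link equity accumulated by the old parameterized URLs transfers to the new friendly ones.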