Spam

Spam's Big Bang! Cable-TV descramblers! FDA-approved diet pills! Viagra without a prescription! Instant access to XXX movies! Dramatically enhanced orgasms! If you have ever received e-mails advertising products and services like these (some quite within the law, some clearly outside it), chances are they came from a guy like Howard Carmack, professional spammer.

Using three computers and working out of his mother's home in Buffalo, N.Y., Carmack sent an impressive 857,500,000 unsolicited e-mails in one year, something that is perfectly legal in New York State. EarthLink took notice (and no wonder) and began a year-long cat-and-mouse game to discover Carmack's true identity. Why do spammers flood the Internet with ads nobody wants to read?

But for an increasing number of Hirsch's imitators, spamming is a numbers game that rewards excess. Spoofed or otherwise, the spam that makes it to your inbox is just the tip of the iceberg.

Stopping spambots with hashes and honeypots. Spam sucks. Any site that allows unauthenticated users to submit forms will have a problem with spamming software (spambots) submitting junk content. A common technique to stop spambots is CAPTCHA, which requires people to perform a task (usually recognizing noisy text) that would be extremely difficult for software. But CAPTCHAs annoy users, and are becoming difficult even for people to get right. Rather than stopping bots by having people identify themselves, we can stop the bots by making it difficult for them to make a successful post, or by having them inadvertently identify themselves as bots. This removes the burden from people and leaves the comment form free of visible anti-spam measures. This technique is how I prevent spambots on this site. Judging by how spammers fail to create spam here, there seem to be three types of spam creators: playback spambots, form-filling spambots, and humans, the last being actual people using your form. A sketch of the hash-and-honeypot idea follows below.
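The post does not include its implementation, but the idea can be sketched in a few lines. This is a minimal illustration, not the author's actual code: the secret key, the field names, and the one-hour expiry are all invented for the example. The real comment field's name is derived from a keyed hash of a timestamp, so a playback bot replaying a saved copy of the form posts to a stale field name, while a hidden honeypot field catches form-fillers.

```python
# Minimal hash-and-honeypot sketch (all names and values are assumptions).
import hashlib
import hmac
import time

SECRET = b"replace-with-a-real-secret"  # hypothetical server-side secret

def field_token(timestamp: int) -> str:
    """Derive a per-form token from the timestamp and a server secret."""
    return hmac.new(SECRET, str(timestamp).encode(), hashlib.sha256).hexdigest()[:16]

def render_form() -> str:
    """Emit a form whose real comment field is named after the token.

    The 'email' field is the honeypot: hidden with CSS, so humans leave
    it empty while naive form-filling bots populate it.
    """
    ts = int(time.time())
    token = field_token(ts)
    return f"""
    <form method="post" action="/comment">
      <input type="hidden" name="ts" value="{ts}">
      <textarea name="comment_{token}"></textarea>
      <input type="text" name="email" style="display:none">
      <button type="submit">Post</button>
    </form>"""

def accept(post: dict) -> bool:
    """Reject stale replays, forged tokens, and filled honeypots."""
    try:
        ts = int(post["ts"])
    except (KeyError, ValueError):
        return False
    if time.time() - ts > 3600:       # stale form: likely a playback bot
        return False
    if post.get("email"):             # honeypot filled: form-filling bot
        return False
    return f"comment_{field_token(ts)}" in post  # field name must match token
```

Humans, the third category, get through this unchallenged, which is the point: the form carries no visible anti-spam measures.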

All Wikipedia Links Are Now NOFOLLOW. I blogged about Wikipedia's issues with spam and the discussions about using NOFOLLOW for all external links from Wikipedia. It has finally been done. As of now, all outbound links from the English Wikipedia site use the NOFOLLOW attribute, no exceptions. It does not matter where you place a link: article page, talk page, user page, project page, whatever.

No link will get any credit at the major search engines. This will not eliminate spam at Wikipedia, but over time it will certainly reduce it a bit. In particular, spamming obscure pages that get virtually no traffic but carry at least some PageRank is now a waste of time for any spammer. Spamming high-traffic areas was already futile without the NOFOLLOW attribute in place, since editors remove the spam within hours or even minutes. There were numerous detailed discussions about the pros and cons of using NOFOLLOW. Update 2: There seems to be some need to explain what NOFOLLOW is and where it comes from.

The big Digg rig | CNET News.com.

Preventing comment spam.

If you're a blogger (or a blog reader), you're painfully familiar with people who try to raise their own websites' search engine rankings by submitting linked blog comments like "Visit my discount pharmaceuticals site." This is called comment spam, we don't like it either, and we've been testing a new tag that blocks it. From now on, when Google sees the attribute (rel="nofollow") on hyperlinks, those links won't get any credit when we rank websites in our search results. This isn't a negative vote for the site where the comment was posted; it's just a way to make sure that spammers get no benefit from abusing public areas like blog comments, trackbacks, and referrer lists. We hope the web software community will quickly adopt this attribute, and we're pleased that a number of blog software makers have already signed on. We've also discussed this issue with colleagues at our fellow search engines and would like to thank MSN Search and Yahoo!
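For blog software, honoring the proposal amounts to stamping the attribute onto links in untrusted, user-submitted content. Here is a rough sketch of that idea, assuming comments arrive as an HTML fragment; the regex approach is illustrative only, and a real implementation should use a proper HTML sanitizer:

```python
# Add rel="nofollow" to anchor tags in untrusted comment HTML.
import re

# Match "<a " only when the tag does not already carry a rel attribute.
ANCHOR = re.compile(r"<a\s+(?![^>]*\brel=)", flags=re.IGNORECASE)

def nofollow_links(comment_html: str) -> str:
    """Insert rel="nofollow" into <a> tags that lack a rel attribute."""
    return ANCHOR.sub('<a rel="nofollow" ', comment_html)

print(nofollow_links('Visit <a href="http://example.com/pills">my site</a>!'))
# -> Visit <a rel="nofollow" href="http://example.com/pills">my site</a>!
```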

The SPF Setup Wizard. Form-based record testers; e-mail-based record testers. We provide an e-mail-based record tester: send an e-mail to spf-test@openspf.net. Your message will be rejected (this is by design) and you will get the SPF result either in your MTA mail logs or via however your MTA reports errors to message senders (e.g. a bounce message). This is done to avoid the risk of backscatter from the tester. The test checks both MAIL FROM and HELO and provides results for both. It uses the Python SPF (pySPF) library, which is fully compliant with the SPF specification. Also listed: a form-based TXT record viewer; for implementors, the Test Suite (not a tool like the others on this page, but maybe what you are looking for); and the retired SPF record wizard. For a long time we had a wizard utility to assist domain owners with creating SPF records for their domains; it has since been taken down.
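As a rough illustration of what the e-mail tester does internally, here is a check with the pySPF library the page mentions (PyPI package pyspf). The IP address, sender, and HELO hostname below are placeholders, not values from the original page:

```python
import spf

# check2() evaluates the sender's published SPF record for the given
# connection and returns a (result, explanation) pair, where result is
# e.g. "pass", "fail", or "softfail".
result, explanation = spf.check2(
    i="192.0.2.1",             # connecting client IP (placeholder)
    s="alice@example.com",     # MAIL FROM address (placeholder)
    h="mail.example.com",      # HELO/EHLO hostname (placeholder)
)
print(result, explanation)
```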

Sugarplum -- spam poison. Sugarplum is an automated spam-poisoner. Its purpose is to feed realistic and enticing, but totally useless or hazardous, data to wandering address harvesters such as EmailSiphon, Cherry Picker, etc. The idea is to contaminate spammers' databases so badly that they must be discarded, or at least that all data retrieved from your site (including actual email addresses) be removed. Sugarplum employs a combination of Apache's mod_rewrite URL-rewriting rules and Perl code. It combines several anti-spambot tactics, including fictitious (but RFC 822-compliant) email address poisoning, injection of the addresses of known spammers (let them all spam each other), deterministic output, and "teergrube" spamtrap addressing.

Sugarplum tries to be very difficult to detect automatically, leaving no signature characteristics in its output, and it may be grafted in at any point in a webserver's document tree, even passing itself off as a static HTML file.
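Sugarplum itself is mod_rewrite plus Perl, but the "deterministic output" tactic it names is easy to illustrate: derive the fake addresses from a hash of the requested path, so repeated visits by a harvester see the same plausible-looking page. Everything below (the names, domains, and counts) is invented for the sketch:

```python
# Deterministically generate fictitious, RFC 822-plausible addresses
# for a given URL path, so the poison page looks static to a bot.
import hashlib
import random

USERS = ["sales", "info", "jdoe", "mwilson", "webmaster", "akhan"]
HOSTS = ["example.com", "example.net", "example.org"]

def poison_addresses(path: str, n: int = 10) -> list[str]:
    """Same path in -> same addresses out, on every request."""
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return [f"{rng.choice(USERS)}{rng.randrange(100)}@{rng.choice(HOSTS)}"
            for _ in range(n)]

print(poison_addresses("/products/widgets"))
```

Determinism matters because a page whose addresses change on every fetch is exactly the kind of signature a careful harvester could use to detect and skip the trap.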