2009/01/18

Error

An interesting thing to feed a search engine is the word "error".

It's a very simple, single word.

You'd think a good search engine would know how to handle all those "mysql error", "php error", "error page not found" queries, etc.

Well, it's pretty interesting to see how each one handles it.

Some just give you, among their top results, "Error! Reason: File 'index.html' was not found!" (really: this is the second result on a popular French engine). Some show it as their very top result ("Error. Reason: File 'menu.asp' was not found!").

Others play it safe: they display news items that mention "error", then the Wikipedia entry for error, and then the algorithm, seeing that the remaining thousands of pages are junk, shows only a handful of results.
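One way out of this mess is to spot the error templates at index time and keep them out of the index entirely. Here is a minimal sketch of that idea; the function name and the patterns are my own illustrative choices, not anything a real engine is known to use:

```python
import re

# Hypothetical heuristic: a page whose title or opening text matches a
# known server-error template is probably not worth indexing at all.
ERROR_PATTERNS = [
    re.compile(r"error!?\s*reason:", re.IGNORECASE),
    re.compile(r"file\s+['\"].+?['\"]\s+was\s+not\s+found", re.IGNORECASE),
    re.compile(r"\b404\s+not\s+found\b", re.IGNORECASE),
]

def looks_like_error_page(title: str, body: str) -> bool:
    """Return True when the page looks like a server error template."""
    # Error templates tend to show up right at the top of the page,
    # so checking the title plus the first few hundred characters is enough.
    text = f"{title} {body[:500]}"
    return any(pattern.search(text) for pattern in ERROR_PATTERNS)

print(looks_like_error_page("Error", "Error! Reason: File 'index.html' was not found!"))
print(looks_like_error_page("Recipes", "Welcome to my cooking blog."))
```

Of course, real error pages come in endless variations and languages, so a pattern list like this only catches the obvious cases.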

I found one search engine that seemed less prone to this kind of trouble than the major search engines we know.

On the other hand, my search engine seems to be affected much later than the other search engines. That's suspicious, considering it's unlikely I'm the only one running into, and trying to fix, that problem. The real answer is... well, it doesn't display the content of the page very well, so you can't see where that "error" comes from.

Is this question relevant in any way? It's the traditional problem: the words written on a page don't really describe what the website is about.
Even if we use the link text from web page to web page, there are going to be people saying "this page yields an error" or something like that.
One could say that if enough people say something different about the page, you just use what the majority is saying; true, but if you are looking at fast-rising pages with only a few words pointing at them...
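That majority idea can be sketched very simply: count how often each term shows up across all the anchor texts pointing at a page, and keep only the terms a majority agrees on. This is my own toy illustration (the function name and the 50% threshold are arbitrary), not a description of how any real engine does it:

```python
from collections import Counter

def majority_anchor_terms(anchor_texts: list[str], min_share: float = 0.5) -> set[str]:
    """Keep only terms appearing in at least `min_share` of the anchors
    pointing at a page; outlier anchors get voted out."""
    term_counts: Counter[str] = Counter()
    for text in anchor_texts:
        # Count each term once per anchor, so one spammy anchor
        # repeating a word fifty times doesn't win the vote.
        term_counts.update(set(text.lower().split()))
    threshold = min_share * len(anchor_texts)
    return {term for term, count in term_counts.items() if count >= threshold}

anchors = [
    "great mysql tutorial",
    "mysql tutorial for beginners",
    "this page yields an error",
    "handy mysql tutorial",
]
print(majority_anchor_terms(anchors))  # {'mysql', 'tutorial'}
```

The weakness is exactly the one mentioned above: a brand-new, fast-rising page may only have two or three anchors, and a majority over three voters means very little.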

I don't have an answer to that; it looks like I'm not the only one, and that's not really a comforting thought for the future of search.

2009/01/06

Security concepts and an open source search engine

Tonight I was reading a very long list of comments on what an ideal distributed open source search engine could look like.

The interesting thing, reading the comments, is how much it relates to security. Let me explain.

The main argument for why an open source (even more so a distributed) search engine can't work in practice is that when you know how the thing works, you can easily influence the results (i.e. spam). And then people begin to praise the "security through obscurity" that the major search engines practice: it is, according to them, the best way to preserve security.
Needless to say, this is wrong. If it weren't, big companies wouldn't be spending money optimizing their rankings, especially if that didn't work at all. Even if you consider the "moron factor", it's too easy to check whether it's effective: run a search and see if you are on the first page.

So, obviously, even for ranking, security through obscurity doesn't work.

As a reminder, the most widely used library for secure communication, openssl, whose source code is widely available and whose encryption algorithms are known, isn't (officially at least) easily cracked. True, there's a lot of money involved in being on the major search engines' first page, and people are desperate to get there. It's also true that brilliant people spend their days trying to break that openssl thing.

So, maybe that's one of the right goals for the next big search engine: a ranking algorithm that can't be gamed, even if you know precisely what the algorithm is.