As you probably know from your own experience, people tend to have very different approaches when it comes to rating something. Even if they agree by trend in liking or disliking a resource, they might have very different ideas of how to express that opinion. This makes it particularly hard to objectively compare two ratings - a problem studied under the term inter-rater reliability.
For illustration, consider some fictional (extreme) examples of BibSonomy users, each rating a publication with 4.5 out of 5 possible stars:
- jock: Jock assigns top scores to everything he doesn't particularly dislike. His rating of a publication with only 4.5 out of 5 indicates that there must be something seriously wrong with it.
- g.rumpy: For him, "I like it" means 2 out of 5 stars. Full score is not even an option, and a score of 4.5 probably means that it's the best stuff ever written.
- mr.normal: Well, Mr. Normal is very normal and so is his rating distribution.
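One common way to make such ratings comparable is to express each score relative to the rater's own habits, e.g. as a z-score over that user's rating history. The sketch below uses invented rating histories for the three fictional users above - the numbers are purely illustrative, not real BibSonomy data:

```python
from statistics import mean, pstdev

# Hypothetical rating histories for the fictional users
# (invented for illustration; not actual BibSonomy data).
ratings = {
    "jock":      [5.0, 5.0, 4.5, 5.0, 5.0],
    "g.rumpy":   [1.0, 2.0, 1.5, 2.0, 4.5],
    "mr.normal": [2.5, 3.0, 3.5, 4.0, 2.0],
}

def z_score(history, score):
    """Express a score relative to the user's own mean rating,
    in units of that user's standard deviation."""
    mu = mean(history)
    sigma = pstdev(history)
    return (score - mu) / sigma if sigma else 0.0

for user, history in ratings.items():
    print(user, round(z_score(history, 4.5), 2))
```

With these made-up histories, the same 4.5-star rating comes out well below jock's personal average but far above g.rumpy's - exactly the effect described above.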
The statistics box can be found on the discussed posts page of any user, e.g. here for user sdo: http://www.bibsonomy.org/discussed/user/sdo.
Each box shows the user's rating distribution, the total number of ratings, and the average rating.
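The three figures shown in such a box can be computed straightforwardly from a list of ratings. A minimal sketch, using an invented sample of one user's ratings:

```python
from collections import Counter
from statistics import mean

# Invented sample of one user's ratings, for illustration only.
user_ratings = [4.5, 3.0, 4.5, 5.0, 2.0, 4.5, 3.5]

distribution = Counter(user_ratings)  # how often each score was given
total = len(user_ratings)             # total number of ratings
average = mean(user_ratings)          # the rating average

print(distribution[4.5])   # number of 4.5-star ratings in the sample
print(total, round(average, 2))
```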
Another such box can be found on the general discussed posts page http://www.bibsonomy.org/discussed.
Here, the statistics cover all ratings given by any user to the discussed publications and bookmarks displayed.
Enjoy this new way of learning what other users think about resources of interest to you.