System for weighted reviews

Our approach is based on the system Yochai Benkler describes in his book "The Wealth of Networks", only slightly modified to meet the requirements of i2geo. In this system every user has a karma value between one and four: a value of one means very bad karma, while a value of four means very good karma. Only members of i2geo can create reviews. By restricting reviews to registered users, we prevent users from giving a resource more than one review by logging out and posting as a guest. Of course we have to add restrictions for members too: it does not make sense to allow a user to write more than one review for the same resource, but they can modify their existing review if they change their mind about it. A new member, who has not yet published a resource or whose resources have not received any reviews, starts with a karma value of two. If the average of his own resources' reviews is only one or two, his karma value stays at two or decreases to one; if the average is three or four, his karma value increases to three or four.
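The karma update described above can be expressed as a minimal sketch. The function name and the clamping-to-the-average behaviour are assumptions for illustration, not the actual i2geo code:

```python
def updated_karma(review_averages):
    """Hypothetical sketch of the karma update described above.

    `review_averages` holds the average review score (1-4) of each
    resource the user has published. A member whose resources have
    no reviews yet keeps the default karma of two; otherwise the
    karma follows the overall average, clamped to the range 1-4.
    """
    if not review_averages:
        return 2.0  # default karma for new members
    avg = sum(review_averages) / len(review_averages)
    return max(1.0, min(4.0, avg))
```

The real system may round or dampen this value further; the sketch only mirrors the rule stated in the text.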

Every review you write on one of your own resources is weighted with a maximum value of one: even if you have a medium or good karma value, your review on your own resources is counted with a fixed weight of one. Therefore nobody can push his own karma value up to four by giving himself good reviews over a long time. A review by a different user is weighted equally or higher. Another way to prevent users from manipulating their karma value is to set a fixed maximum number of reviews a user can write in a certain period of time, for example three reviews per day and five reviews in three days. This is not implemented yet.

Furthermore we should weight detailed reviews higher than quick reviews. This could be achieved by reducing the weight by one if someone only clicks the major subjects, but that is too coarse, so we decided to compute a precision value between zero and one for each review and multiply the karma value by this precision. It describes, as a percentage, how complete the review is. First we split the review inputs into three parts. For example, we want to set the weight higher if the review was created after a trial in a classroom, so the checkbox "This review concludes a trial in a classroom" contributes 10% of the total review precision. The nine major items contribute 40%, and a detailed review that fills in all subitems contributes the remaining 50%. That means a quick review without a classroom trial has a precision of 40%, a detailed review without a classroom trial has a precision of 90%, and only a detailed review after a classroom trial reaches 100%. Within each of these three parts the value is computed linearly; for example, there are 50 subitems, so every subitem adds 1% to the precision. This way the review of a user with a high karma value still has a bigger weight than a review by the author or by users with bad karma, but detailed reviews contribute a larger part to the review summary. It also means that a user who writes a quick review on his own resource only adds to the summary value with a weight of 0.4, even if his karma is two. There are lots of ways to make this more fine-grained; for example, we could lower the weight of a bad review that comes with no comment. We want to weight helpful reviews higher than merely filled-in ones.
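The precision computation above can be sketched as follows; the function name and the default subitem count are assumptions based on the 50-subitem example in the text:

```python
def review_precision(classroom_trial, major_items_done, subitems_done,
                     total_subitems=50):
    """Precision of a review in [0, 1], as described above:
    10% for a classroom trial, up to 40% for the nine major items,
    and up to 50% for the detailed subitems, each part linear.
    """
    precision = 0.10 if classroom_trial else 0.0
    precision += 0.40 * (major_items_done / 9)
    precision += 0.50 * (subitems_done / total_subitems)
    return precision
```

A quick review (all nine major items, no subitems, no trial) yields 0.4; filling every subitem raises this to 0.9, and a classroom trial on top of that to 1.0.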

The review summary of a resource is computed as a weighted average. At the moment we compute the value of a single review as a plain average of the items the user filled in. Then we multiply this value by the karma value of the user who wrote the review and by the precision of the review. These products are summed up and divided by the sum of the users' karma values multiplied by their precisions.
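In code, this weighted average might look like the following sketch, where each review is a triple of review value, reviewer karma weight, and precision (the names are assumptions):

```python
def summary_rank(reviews):
    """Weighted average of reviews, where each review is a
    (value, karma, precision) triple: sum(v*k*p) / sum(k*p).
    """
    numerator = sum(value * karma * precision
                    for value, karma, precision in reviews)
    denominator = sum(karma * precision
                      for _, karma, precision in reviews)
    return numerator / denominator if denominator else 0.0
```

For example, a review of value 1 with karma 2 and precision 0.3555 plus a self-review of value 4 with capped weight 1 and precision 1 gives (0.711 + 4) / 1.711, roughly 2.75.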

At the moment this version of the weighted review system is running on the draft site. It is a good example of how little you can push your own resources. I gave my own resource a very good review, even with the highest precision. Paul gave me a bad review with a precision of only 35.55%, but since he is not the creator of the resource his karma stays at two, while my own weight is capped at one for the computation of the summary rank. That means the stars are computed by (0.3555 * 2 * 1 + 1 * 1 * 4) / (0.3555 * 2 + 1 * 1) = 4.711 / 1.711 ≈ 2.75.
My new karma is evaluated as the average of all my reviewed resources. There are only two, both with a summary rank of two, so my karma stays at two.

An overview of the user marion:

Reviews of Resource 1:
Review | Author | Karma of the review author | Precision

Reviews of Resource 2:
Review | Author | Karma of the review author | Precision

The karma of marion is now 2.
The user Benjamin Clerc also has two reviewed resources, one with a rank of three and one with a rank of four; therefore his karma has a value of 2.9 (see: Resource 1, Resource 2).