Orbscore
The orbscore is one of the major functions of our platform and one of our contributions to making the crypto space / DeFi safer for the general public. Risk management is a key factor for success in this space ... and we all hate losing money to scammers.
The orbscore is a risk measurement tool that combines different parameters, like liquidity, marketcap, on-chain metrics, and social metrics, into a single number ranging from 0 (very high risk) to 100 (low risk), which lets you see how risky a token or your portfolio is.
We calculate the orbscore for every token in our database, and if you have an OrbTrader bot running with us, you always see the total orbscore of all your tokens in the balance section.
Soon we will also allow you to check any external wallet for its orbscore.
If you are a beginner or a degen coinsniper, you should always check the orbscore of a token before you buy it!
The orbscore of a token is displayed in every token report and when you search our database for a token to trade.
In your account section, you can set your personal orbscore threshold. By default this is set to 50, which is a reasonable value for beginners who just want some extra security.
With that setting, the platform only shows you tokens with a score higher than 50, and you will not be able to buy any token with a score below that.
If you want to use the sniper strategies, especially the pair snipers, you have to set your orbscore threshold to 0. This disables the filter and leaves you with only basic protection.
Warning: We don't recommend disabling the orbscore! Please make sure you know what you're doing!
Internally, the orbscore consists of two numbers: the orbscore itself (o_score) and a relevance score (r_score).
The o_score combines different on-chain and off-chain metrics into a weighted sum, which is a number between 0 and 1. On-chain metrics are, for example, the liquidity, the total supply, the number of holders, etc. Off-chain metrics are social followers, Twitter mentions, GitHub activity, etc.
The relevance score is the sum of the weights of the available parameters, divided by the total sum of all weights. So the more information we have about a token, the higher the relevance score. It always has a value between 0 (not relevant, because we have no information) and 1 (very relevant, all information available).
The final orbscore is: orbscore = o_score * r_score * 100
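To make this concrete, here is a minimal Python sketch of how o_score, r_score, and the final orbscore might fit together. The metric names and weights are illustrative placeholders, not our production configuration, and the metrics are assumed to already be scaled into the 0 to 1 range against a market reference (more on that in the example below):

```python
# Minimal sketch of the orbscore combination. Metric names and
# weights are illustrative placeholders, not production values.
METRIC_WEIGHTS = {
    "liquidity": 0.9,          # on-chain
    "marketcap": 0.5,          # on-chain
    "twitter_mentions": 0.3,   # off-chain
    # ... many more on-chain and off-chain metrics in practice
}

def orbscore(scaled_metrics: dict[str, float]) -> float:
    """scaled_metrics maps a metric name to its value, already scaled
    into [0, 1] against a dynamic market reference. Metrics we have
    no data for are simply absent from the dict."""
    total_weight = sum(METRIC_WEIGHTS.values())

    # o_score: weighted sum over the metrics we actually observed
    o_score = sum(
        scaled_metrics[name] * weight
        for name, weight in METRIC_WEIGHTS.items()
        if name in scaled_metrics
    )

    # r_score: weights of the available metrics relative to all weights
    r_score = sum(
        weight
        for name, weight in METRIC_WEIGHTS.items()
        if name in scaled_metrics
    ) / total_weight

    return o_score * r_score * 100
```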
Example:
Say we have a token with a WETH liquidity pool on a DEX worth 1,000,000 USD. The total marketcap of that token is 10,000,000 USD.
These are two of our parameters: liquidity and marketcap.
Each of them has a weight assigned to it that reflects how important it is for the overall risk of the token.
liquidity_weight = 0.9
marketcap_weight = 0.5
To scale our token values of 1,000,000 and 10,000,000 to a value between 0 and 1, we need a reference maximum value. This is one of our special features: the orbscore is always dynamic and adapts to the market situation, so in a bear market the score might be different than in a bull market.
The reference value for the liquidity is the total WETH liquidity available. This is a good reference point, as it reflects the overall market liquidity.
The reference value for the marketcap is the fully diluted marketcap of WETH, because the token's pool is denominated in WETH (assuming a single-pool project ... to keep things simple).
Assuming now that we only have these two metrics, we need to calculate the relevance score:
r_score = (0.9 + 0.5) / 1.4 = 1
Both weights are added, because both pieces of information are available, and the sum is divided by the total sum of all metric weights.
So our final orbscore would be:
(1000000 / total_weth_liquidity * 0.9 + 10000000 / fully_diluted_marketcap_weth * 0.5) * 1 * 100
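Here is the same calculation as a runnable Python sketch. The two reference values are made-up placeholders, since the real ones are derived from live market data:

```python
# Reproducing the simplified two-metric example above. The two
# reference values are placeholders; in production they come from
# live market data, which is what makes the score dynamic.
liquidity = 1_000_000           # USD value of the token's WETH pool
marketcap = 10_000_000          # USD marketcap of the token

total_weth_liquidity = 5_000_000_000            # placeholder
fully_diluted_marketcap_weth = 400_000_000_000  # placeholder

liquidity_weight = 0.9
marketcap_weight = 0.5

# Both metrics are available, so the relevance score is 1.
r_score = (liquidity_weight + marketcap_weight) / (liquidity_weight + marketcap_weight)

o_score = (liquidity / total_weth_liquidity * liquidity_weight
           + marketcap / fully_diluted_marketcap_weth * marketcap_weight)

orbscore = o_score * r_score * 100
print(round(orbscore, 4))  # the magnitude depends entirely on the placeholder references
```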
This was a very simplified example to avoid complicated formulas here, but I hope you got the idea. For example, the marketcap alone means nothing without liquidity, so we take the liquidity factor into account for the marketcap weights ... and of course the orbscore works with many different parameters, not just two.
This whole topic is our current research focus. With a growing database of tokens it will improve over time, and we will publish the full source code and a detailed paper about it.
The most interesting challenge here is determining the weights. We are using different AI models to find correlations between the input parameters (liquidity, for example) and the risk / success of a project.
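As a toy illustration of that idea (the data below is made up, and our actual models go well beyond a single correlation coefficient):

```python
# Toy illustration of the weight-finding idea: check how strongly an
# input parameter correlates with project outcomes. The data is made
# up for illustration; this is not our production approach.
import numpy as np

# Hypothetical historical data: liquidity at launch (USD) and a label
# marking whether the project turned out to be a failure / rug pull.
liquidity = np.array([1.2e6, 3.0e4, 8.0e5, 5.0e3, 2.5e6, 1.0e4])
failed    = np.array([0,     1,     0,     1,     0,     1])

# Pearson correlation between log-liquidity and the failure label
corr = np.corrcoef(np.log10(liquidity), failed)[0, 1]
print(f"correlation: {corr:.2f}")  # strongly negative here: more liquidity, fewer failures
```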