Monday, April 20, 2009

TickerSense Blogger Sentiment Poll: A look at the data

In the past I have looked at the performance of the participants in the TickerSense Blogger Sentiment Poll. While the results as a whole have not been predictive, when they are broken down by individual blogger some do perform better than others.

In the first part of this examination I will compare the deviation of each blogger's net bullishness or bearishness with respect to the S&P over five periods: Dec06-Apr07, Apr07-Sep07, Oct07-Mar08, Apr08-Oct08, and Oct08-Mar09. This is different from my earlier look at accuracy, which I will return to in a later post.

Each of the aforementioned date ranges contains 21-22 data points, each an individual TickerSense Poll. The Poll asks participants whether they are bullish, bearish, or neutral on the S&P for the next month. I assumed a neutral call to be an S&P change, from Monday's open to the closing price one month later, of between -1% and +1%; a gain of more than 1% was considered bullish, while a loss of more than 1% was bearish.
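For anyone who wants to reproduce this, the classification rule can be expressed in a few lines of Python (a rough sketch for illustration only; the function name classify_change is mine, not part of the poll):

```python
def classify_change(pct_change):
    """Map a one-month % change in the S&P to a -1/0/+1 outcome."""
    if pct_change > 1.0:
        return 1    # gain of more than +1%: bullish outcome
    if pct_change < -1.0:
        return -1   # loss of more than 1%: bearish outcome
    return 0        # change between -1% and +1%: neutral

print(classify_change(2.3))   # 1
print(classify_change(-0.4))  # 0
```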

Each blogger's call for a given week was assigned a value: -1 if bearish, 0 if neutral, and +1 if bullish. So a blogger who made 22 calls would have 22 values of -1, 0, or +1. I did the same for the S&P, i.e. if the S&P closed more than 1% higher a month later the initial date got a score of +1, if the change was between -1% and +1% it got a zero, and if it lost more than 1% it was assigned a value of -1. So, like the bloggers, the S&P got 22 values of -1, 0, or +1.
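The value assignment itself is trivial; as a Python sketch (the call strings and the sample list are hypothetical):

```python
# Map each weekly call to its -1/0/+1 value.
CALL_VALUES = {"bearish": -1, "neutral": 0, "bullish": 1}

weekly_calls = ["bullish", "bullish", "neutral", "bearish"]  # hypothetical sample
values = [CALL_VALUES[call] for call in weekly_calls]
print(values)  # [1, 1, 0, -1]
```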

I then summed the values for each blogger and for the S&P, divided by the number of calls, and multiplied by 100 to express each as a score between -100 and +100. E.g. if a blogger had made 10 calls, three bearish (-3), two neutral (0), and five bullish (+5), their score would be ((-3) + 0 + 5) / 10 × 100 = +20.
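In code, the scoring step looks something like this (again a sketch; sentiment_score is my own name for it), and it reproduces the worked example above:

```python
def sentiment_score(values):
    """Average the -1/0/+1 values and scale to the -100..+100 range."""
    return sum(values) / len(values) * 100.0

example = [-1] * 3 + [0] * 2 + [1] * 5  # 3 bearish, 2 neutral, 5 bullish
print(sentiment_score(example))  # 20.0, i.e. a score of +20
```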

Each blogger's score then had the corresponding S&P score subtracted from it (the score for the S&P was calculated in the same way as for the bloggers, except that it measures performance rather than prediction).

Bloggers who matched the S&P perfectly over a given time frame would have a difference of 0; the maximum possible difference is 200 (e.g. a blogger scoring +100 against an S&P score of -100). Absolute values were used to standardize the difference.
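As a final sketch, the sync measure is just the absolute gap between the two scores (sync_difference is my own illustrative name):

```python
def sync_difference(blogger_score, sp_score):
    """Absolute gap between a blogger's score and the S&P's score."""
    return abs(blogger_score - sp_score)

print(sync_difference(20, -60))    # 80
print(sync_difference(100, -100))  # 200, the theoretical maximum
```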

It should be noted that this does not measure the accuracy of the calls bloggers made; it simply reflects their general 'feel' for the market over each of the time frames. It also doesn't take into account the variation in the number of calls made. The latter point is important because the S&P score is always drawn from the maximum number of data points (21 or 22), while some blogger scores are drawn from only five or six calls over the same period.

The first thing I looked at was the Average Difference between Blogger opinion and S&P Performance for all time periods. Crowder Blog, Traders-Talk, Jack Stevison, Wishing Wealth, Knight Trader, Self Investors, CXO Advisory, and HedgeFolios had only three qualifying periods; the rest of the participants had five.


What was interesting from the initial data crunch was the clustering of the bloggers. Wishing Wealth was most 'in sync' with the market (again, this is an average difference to the S&P and does not reflect call accuracy), followed by a cluster of bloggers, Information Arbitrage, Controlled Greed, and Traders-Talk, around the 30 mark. LearningCurve was the most out of sync with the market. The cluster in the 60-80 range offered no edge on the market.

Tomorrow I will look at the data with respect to time; this will show the shift in performance by 'bullish' or 'bearish' bloggers as markets went from the bull market of 06-07 into the bear of 08-09.

Dr. Declan Fallon, Senior Market Technician, Zignals.com, the free stock alerts, market alerts, and stock charts website
