Wednesday, August 27, 2008

Investor Sentiment that kinda works...

In an earlier article on the work of Ronald Domingues and sentiment surveys, one of the criticisms I heard (from our CEO) was that a forecasted 1-day return of 0.1% is not a tradable 'entity', irrespective of how accurate survey participants were at calling the outcome. One way to address this would be to broaden the return threshold from 0.1% to 0.3% or 0.5%. I'll do this in a separate article, since it shoehorns respondents into a prediction they may not have made at the larger threshold; more neutral calls would likely have been made if predictive returns of 0.3% or higher were required. The other approach, featured in this article, was to apply similar filtering rules to the Ticker Sense Blogger Sentiment Poll.

I used the data from my initial performance review of the poll, covering October 2007 to the start of April 2008, since less crunch work was needed on this data.

The Ticker Sense Blogger Poll is based on a predictive call for a 1% change or more in the S&P over a 30-day period (bull or bear depending on whether the S&P finishes higher or lower). A neutral call covers anything within 1% either side of no change.
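As a rough illustration, the scoring rule described above could be sketched as follows. This is my own reconstruction of the poll's rules, not Ticker Sense code, and the function names and prices are made up for the example.

```python
# Sketch of scoring a Ticker Sense style call: a 30-day S&P move of +1%
# or more is "bull", -1% or worse is "bear", anything in between is
# "neutral". My own reconstruction, not official Ticker Sense logic.

def classify_move(start_price: float, end_price: float,
                  threshold: float = 0.01) -> str:
    """Label the realised 30-day S&P move as bull, bear, or neutral."""
    ret = (end_price - start_price) / start_price
    if ret >= threshold:
        return "bull"
    if ret <= -threshold:
        return "bear"
    return "neutral"

def call_correct(call: str, start_price: float, end_price: float) -> bool:
    """A call counts as correct when it matches the realised move's label."""
    return call == classify_move(start_price, end_price)

# Example: a bull call with the S&P up 2% over the window is correct.
print(call_correct("bull", 1400.0, 1428.0))  # True
```

Note the neutral band is what makes the accuracy numbers later in this piece a three-way, not two-way, comparison.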

Because of the smaller data set I set the qualification bar at 40% correct over the previous month (effectively 2 out of 5 correct calls, so it's a bit crude). Individuals making the grade were grouped as "Qualifiers" and the rest were lumped together as "Non-qualifiers". The predictive power of the qualifiers versus the non-qualifiers was then compared over the next month (which itself served as the qualification period for the following month).
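The rolling qualification filter is simple enough to sketch. This is a minimal illustration under my own assumptions about data shape (a dict of per-blogger True/False call results for the qualification month); the blogger names are placeholders.

```python
# Minimal sketch of the qualification filter: a blogger qualifies for the
# next round when at least 40% of their calls in the previous month were
# correct. Data shapes and names here are illustrative only.

def split_qualifiers(records, cutoff=0.40):
    """records: {blogger: [True/False per call]} for the qualification month.
    Returns (qualifiers, non_qualifiers) as sorted lists."""
    qualifiers, non_qualifiers = [], []
    for blogger, calls in records.items():
        accuracy = sum(calls) / len(calls) if calls else 0.0
        (qualifiers if accuracy >= cutoff else non_qualifiers).append(blogger)
    return sorted(qualifiers), sorted(non_qualifiers)

month1 = {
    "Blogger A": [True, True, False, False, False],   # 40% -> qualifies
    "Blogger B": [True, False, False, False, False],  # 20% -> does not
}
print(split_qualifiers(month1))
# (['Blogger A'], ['Blogger B'])
```

Each evaluation month then doubles as the qualification period for the next round, so the two groups are re-formed every month.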

There were a total of four rounds.

The first point to make was the very low call accuracy across participants: 24.5% in this case, compared to the 65.2% accuracy rate of Ronald's survey participants. Adding salt to the wound, the average call accuracy of qualifying bloggers was 25%, compared to 24% for non-qualifying bloggers. There was no predictive edge from bloggers who were 'better' at calling the market during the qualification period.

The results broken down by round perhaps suggest some inkling of predictive power from qualifying bloggers. The real problem was that when qualifying bloggers were bad with their calls, they were really bad, whereas non-qualifying bloggers remained relatively consistent across time frames (except for the last round). This needs more data; the next review of our bloggers, covering April to the present day, will flesh this out a bit more.

In terms of standard error bar overlap, rounds 1 and 3 went to qualifying bloggers, round 4 to non-qualifying bloggers, and round 2 was a tie.

How about the performance of individual bloggers?

Random Roger qualified for all four rounds of participation and was the only blogger to do so.

Qualifying for three rounds were 24/7 Wall Street, Information Arbitrage, Self Investors and Traders-Talk. The two-round qualifiers included myself, with a glut of bloggers making the cut for just one round.

There are caveats, and these impact the predictive power of bloggers. From the start of my analysis of this poll (December 2006) up to April of this year, Random Roger has been 100% bearish on the market, while Carl Futia has been 100% bullish. In my May 2007 analysis Carl made more correct calls than Roger, as the S&P was in the latter stages of its cyclical bull market.

Did Roger predict the current bear market early? At the start of December 2006 the S&P traded around 1,396. It closed Tuesday at 1,272, so Roger was right to be steadfast in his convictions. But during this period the S&P topped at 1,576 in October 2007. So is this a case of strong predictive power? Or a case of a market moving in sync with a blanket bull/bear call, thereby giving the perception of predictive power? More interestingly, when will Roger turn bullish, or Carl turn bearish?
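For the record, the arithmetic behind those index levels can be sanity-checked in a couple of lines (prices are the ones quoted above; the percentages are my own back-of-envelope figures, not from the poll data):

```python
# Back-of-envelope check on the S&P levels quoted in this article.
start, close_tue, peak = 1396.0, 1272.0, 1576.0

point_to_point = (close_tue - start) / start  # Dec 2006 -> this Tuesday
to_peak = (peak - start) / start              # Dec 2006 -> Oct 2007 top

print(f"Dec 2006 to now:  {point_to_point:+.1%}")  # about -8.9%
print(f"Dec 2006 to peak: {to_peak:+.1%}")         # about +12.9%
```

So a blanket bearish call was down roughly 9% point-to-point, yet sat through a near 13% rally on the way; that is the tension between "right eventually" and "right now".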

As an aside, is there an increase in traffic to bearish bloggers during down trends and bullish bloggers during up trends?

I compared the traffic of Carl's and Roger's sites over the past year (selected on the basis that they were the only bloggers to post 100% bull and bear calls). Based on Quantcast estimates, Roger got a nice bump in daily visitors in February and held that readership through the year. Carl has seen a steady rise since March.

Matching visitor spikes to either blog didn't suggest any direct or contrarian relationship marking bullish or bearish phases in the market. However, peaks in traffic at both blogs were common around reversals; market tops more so than bottoms. This is backed by an earlier piece I did on blog traffic and market reversals, which at the time of writing marked the May top and measured a 16.6% decline from the May high to the July lows.

So maybe it is blog traffic, not blogger opinion, which provides the best predictive power for markets.

Dr. Declan Fallon, Senior Market Technician, the free stock alerts, market alerts, and stock charts website