I've just upgraded my free Flickr account to a "pro" account for about £13.
One of my reservations about Flickr is that the comments are 99.99% favourable, no matter the nature of the picture. It's a bit of a back-slapping exercise, rather than a true forum for critical discussion. Unlike this venerable blog.
However, there is an upside. Well, several, I think.
Firstly, there is a lot of great stuff on there. But you have to look. If you do find great stuff, make it a favourite, and learn from seeing that work, that approach, that style. I like this aspect.
Secondly, and the main meat of this post, with the pro account comes the rather natty "Statistics" package. This breaks down all the traffic on your photostream, tells you which images have the most views, comments and favourites, and can do all kinds of funky stuff with the data. So, while the comments on individual images are not all that useful, if you look at the total picture, the stats can reveal things about the images that the comments do not.
In addition, I have entered some of my photostream images into club competitions, so I know how well they have done. I can compare this against my stats and see how well the stats bear the results out. As in any clinical trial, individual results don't give the complete picture, but the larger the sample group, the greater the statistical power, and the more relevant any trends become.
To test this, I have had a look at some images that I know I have submitted to club competitions and that have also been on my photostream for some time now. Here is how a selected few break down, from a photostream of 44 images:
Window and Chair - 1st in a print competition; most popular on Flickr
Sunset Beach Walkers - digital pictorial image of the year; 3rd most popular on Flickr
Shadowman - creative image of the year 07/08; 6th most popular on Flickr
New York Ferry Moment - bombed in CCC; 16th on Flickr
Help - bombed in CCC; 18th on Flickr
Lady - bombed in CCC; 19th on Flickr
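The split in the list above can be summarised numerically. A minimal sketch, using only the six results quoted in this post (photo names and ranks as given; the "won"/"bombed" labels are my shorthand for the outcomes described):

```python
# Compare club-competition outcomes with Flickr popularity ranks.
# Data taken from the post: three images that won something in club
# competitions, three that bombed in CCC, with their Flickr rank
# out of a 44-image photostream.
results = {
    "Window and Chair": ("won", 1),
    "Sunset Beach Walkers": ("won", 3),
    "Shadowman": ("won", 6),
    "New York Ferry Moment": ("bombed", 16),
    "Help": ("bombed", 18),
    "Lady": ("bombed", 19),
}

def mean_rank(outcome):
    """Average Flickr rank of the images with the given outcome."""
    ranks = [rank for (o, rank) in results.values() if o == outcome]
    return sum(ranks) / len(ranks)

print(round(mean_rank("won"), 1))     # → 3.3
print(round(mean_rank("bombed"), 1))  # → 17.7
```

The winners average a Flickr rank of about 3.3 and the flops about 17.7, so on this (admittedly tiny) sample the two measures agree rather well.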
The others I have not submitted before, so I can't compare. However, I have submitted two images to next week's competition that are also on Flickr, currently sitting at 4th and 13th. That would imply that one will get something and the other will bomb, which was my EXACT pattern last year in competitions with two entries! So, we'll see...
To complete the picture, I'm going to upload all my entries from last year and see how they do. Clearly this is not scientifically controlled, but I'll try not to bias the descriptions or tags to favour one over another. Also, I have submitted some images to other groups and some not, so there's clearly a difference in each image's exposure to the wider Flickr community. I'll try to be even-handed with this too, but if one image gets an invite and another doesn't, that too is a measure of its popularity, so I can't make exposure totally even, and neither should I.
Ivan
1 comment:
Well, that is interesting and a good method (being a scientist) to measure the 'value' of a photo. For me, this might be useful in deciding whether to put a photo on the market or not. Something which, at the moment, I have to test by putting it out there and seeing whether, and how many, are bought.
The other issue with the measure of exposure of course is how long it has been on Flickr, but one could get around this. I would imagine that if one plots a graph of time on the x-axis and number of NEW hits on the y-axis that all photos should show a similar trend: that is, at some point in absolute time the number of hits will reach an asymptote and thereafter increase ever so slowly. It might be that all photos reach the asymptote at the same time, or roughly so. You can also, on the x-axis, put the number of cumulative hits and it will show a similar trend. Using this graph you can determine at what point all photos will have received a relatively equal exposure.
Dr Lidgard