Ada Lovelace Day

About The Authors

Suw Charman-Anderson

Suw Charman-Anderson is a social software consultant and writer who specialises in the use of blogs and wikis behind the firewall. With a background in journalism, publishing and web design, Suw is now one of the UK’s best known bloggers, frequently speaking at conferences and seminars.

Her personal blog is Chocolate and Vodka, and yes, she’s married to Kevin.


Kevin Anderson

Kevin Anderson is a freelance journalist and digital strategist with more than a decade of experience with the BBC and the Guardian. He has been a digital journalist since 1996 with experience in radio, television, print and the web. As a journalist, he uses blogs, social networks, Web 2.0 tools and mobile technology to break news, to engage with audiences and tell the story behind the headlines in multiple media and on multiple platforms.

From 2009-2010, he was the digital research editor at The Guardian where he focused on evaluating and adapting digital innovations to support The Guardian’s world-class journalism. He joined The Guardian in September 2006 as their first blogs editor after 8 years with the BBC working across the web, television and radio. He joined the BBC in 1998 to become their first online journalist outside of the UK, working as the Washington correspondent for BBCNews.com.

And, yes, he’s married to Suw.


Dark Blogs Case Study

Case Study 01 - A European Pharmaceutical Group

Find out how a large pharma company uses dark blogs (behind the firewall) to gather and disseminate competitive intelligence material.




All content © Kevin Anderson and/or Suw Charman

Interview series at the FASTforward blog. Amongst them: John Hagel, David Weinberger, JP Rangaswami, Don Tapscott, and many more!

Corante Blog

Tuesday, October 20th, 2009

John Mair demonstrates how to really not get it

Posted by Suw Charman-Anderson

I’m sure everyone’s fed up of the Jan Moir debacle that’s been occupying the UK Twittersphere for the last week, but I was made rather cross by this ill-judged and misinformed article by John Mair on Journalism.co.uk yesterday.

For those of you blessed enough not to have heard about the Jan Moir/Daily Mail controversy, suffice it to say that she wrote a hateful and homophobic article about Boyzone singer Stephen Gately, who died of a previously undiagnosed heart condition. Moir’s piece caused uproar amongst the online community, particularly on Twitter, causing some advertisers to remove their ads from the page and forcing Moir to apologise (in a manner of speaking). There have since been acres of print and pixel devoted to unpicking it all.

One such piece by John Mair, a senior lecturer in broadcasting at Coventry University, makes a number of mistakes that I think are themselves worth unpicking.

Mair’s first mistake is to say that “blogosphere went mad seeking revenge”. Lots of people were very cross with Moir’s piece, but to dehumanise people’s reactions by lumping them all together as “the blogosphere” and then to trivialise the reaction as “going mad” and “seeking revenge” is to mischaracterise the entire episode. It implies that everyone who reacted to Moir’s piece somehow lost their sense of proportion and overreacted in a little moment of insanity. This is rather insulting - people were justifiably cross with Moir and the Mail and, whilst people were vociferous, to characterise them as seeking revenge is hyperbolic.

Mair’s second mistake is in his second paragraph, where he implies that celeb-Twitterers Stephen Fry and Derren Brown organised the protests on Twitter and Facebook. That’s also not true - this wasn’t a crowd baying for blood, led on by the Twitter elite. Stephen and Derren were, like everyone else, reacting to a rapidly spreading meme. There was no movement and they did not organise anything. They just helped the meme along. (It’s important to note that memes are like ocean waves - they don’t move the water itself, they move through the water.)

A little later on, Mair asks, “So how democratic are these manifestations of the virtual mob?”.

Ok, so what exactly is “democracy”? The dictionary on my Mac says:

democracy |diˈmäkrəsē|
noun ( pl. -cies)
a system of government by the whole population or all the eligible members of a state, typically through elected representatives : capitalism and democracy are ascendant in the third world.
• a state governed in such a way : a multiparty democracy.
• control of an organization or group by the majority of its members : the intended extension of industrial democracy.
• the practice or principles of social equality : demands for greater democracy.

Looking at that list, none of those really apply to the phenomenon we observed. There was no organisation and no group ergo no members, unless - and I think this is where Mair gets confused - unless you label the people who complained, post hoc, as a de facto group that must therefore have organisers. That’s a rationalisation that doesn’t hold water - anger with Moir spread through Twitter organically: as one person Tweeted their disgust, others found out about the article and then expressed their own feelings. There was nothing orchestrated about it and the concept of ‘democracy’ cannot and should not be applied. A spontaneous expression of a shared opinion is not a democracy.

What about “mob”?

mob |mäb|
noun
a large crowd of people, esp. one that is disorderly and intent on causing trouble or violence : a mob of protesters.
• (usu. the Mob) the Mafia or a similar criminal organization.
• ( the mob) the ordinary people : the age-old fear that the mob may organize to destroy the last vestiges of civilized life.

Was there a mob? There certainly were a large number of people involved, but were they a crowd? Were they grouped together in one spot and intent on causing trouble or violence? I think it would be stretching the definition of ‘mob’ too far to use it to describe the people upset by Moir’s homophobia.

Mair then tells us that the internet is a double-edged sword, something which is undoubtedly true, although it is more accurate to describe the internet as neutral - neither good nor bad, and therefore capable of being used for good or bad. But the tone of his assertion implies that actually, he thinks the internet is baaaaad.

Now we get to the meat of the wrongness of this piece. Mair compares the expression of disgust at Moir with the hounding of Jonathan Ross and Russell Brand.

It can lead to interactivity and enrichment but it can also lead to bullying by keystroke. The zenith of that was the Jonathan Ross/Russell Brand row in the autumn of 2008 but nowadays broadcasters, especially the BBC, are facing ‘crowd pressure’ from internet groups set up for or against a cause or a programme; they are an internet ‘flash mob’. With the emphasis, maybe, on the ‘mob’.

When Jonathan Ross and Russell Brand rang up the veteran actor Andrew Sachs on October 18 2008 and were disgustingly obscene to him about his grand-daughter, that led to a huge public row on ‘taste,’ mainly stoked by the Daily Mail and the Mail on Sunday.

Fuel was added to the fire through comments by the Prime Minister. The ‘prosecuting’ virtual group was the editorial staff of the Mail newspapers and its millions of readers in Middle England. In support of the ‘Naughty Two’, more than 85,000 people joined Facebook support groups. Many, perhaps most, had never heard the ‘offensive’ programme. Just two had complained after the first broadcast.

The BBC was forced after a public caning to back down, the director-general yanked back from a family holiday to publicly apologise, Brand and his controller resigned and Ross was suspended from radio and television for three months. The virtual mob smelt blood: it got it.

The Ross/Brand incident bears no resemblance to the Moir incident. Ross & Brand’s stupidity would have gone unnoticed by the vast majority of people had the Daily Mail and the Mail on Sunday (and a variety of other newspapers) not brought it to their attention and demanded that ‘something be done’ - that something, of course, being complaints to the BBC.

There was no “‘crowd pressure’ from internet groups” nor was there any sort of “internet ‘flash mob’”. There was only pressure brought to bear by the tabloids via the medium of the internet. The protest was not grass roots, it was orchestrated (oh the irony!) by the Mail and Mail on Sunday. Mair knows this, as he explicitly states it, yet still he uses this example as illustrative of the awfulness of the internet and the propensity of internet users to mobbish behaviour. Sorry, Mair, I call bullshit.

Mair then goes on to cite another irrelevant example, the protests over Jerry Springer: The Opera:

Fifty five thousand Christians petitioned the BBC to pull it from the schedules because of its profanity and alleged blasphemy. They engaged in modern guerilla warfare tactics to try to achieve their aim. Senior BBC executives had to change their home phone numbers to avoid that pressure. That campaign did not get a ‘result’. If Facebook had been in full flow then, the 55,000 may well have been 555,000 and the result very different.

The offended Christians were, again, organised. And again, it was not a spontaneous outpouring of dissatisfaction. They did not use “modern guerilla warfare tactics”, they used the communications tools open to them at the time, just like everyone else does. They didn’t succeed in getting the opera pulled, perhaps because the BBC felt that, in this case, the claims of offence were out of proportion. Would they have been successful had they been able to use Facebook? I would hope not, but the BBC’s spine does go through soft phases.

Mair concludes with:

This is activism by the click. It needs no commitment apart from signing up on a computer. It gives the illusion of democracy and belonging to a movement whereas in reality is it membership of a mob, albeit a virtual one? Is this healthy for democracy and media accountability or not?

Here Mair lays his biases bare. He may as well have said, “I just don’t like the whole idea of the audience having opinions and having a way to express those opinions. The fact that lots of people seemed to agree - quite independently - about how awful Jan Moir’s article was puts the fear of god up me, because suddenly I am accountable not just to my paymasters, but to my audience. Directly. And who’s going to protect me when these scary people with opinions come knocking at my door? Wasn’t it so much nicer in the old days, when the audience couldn’t answer back?”

Groups of people on the internet who all express a similar opinion are not de facto mobs. Expressing an opinion can be a part of democracy, but democracy is not simply the expression of opinion.

Mair’s piece is risible. He fails to understand Twitter, sees this as an opportunity to demonise the internet and draws false comparisons between unrelated incidents. Frankly, the media’s buggered if this is the prevalent attitude in our universities.

Monday, July 13th, 2009

The plural of anecdote is not data

Posted by Suw Charman-Anderson

The City, and sections of the media, are getting a touch over-excited by a “research note” written for Morgan Stanley by Matthew Robson, a 15 year old on work experience. The Guardian said:

The US investment bank’s European media analysts asked Matthew Robson, an intern from a London school, to write a report on teenagers’ likes and dislikes, which made the Financial Times’ front page today.

His report, that dismissed Twitter and described online advertising as pointless, proved to be “one of the clearest and most thought-provoking insights we have seen – so we published it”, said Edward Hill-Wood, executive director of Morgan Stanley’s European media team.

“We’ve had dozens and dozens of fund managers, and several CEOs, e-mailing and calling all day.” He said the note had generated five or six times more responses than the team’s usual research.

The research note itself can be read on The Guardian’s site.

I’m going to start by giving Robson the kudos that he deserves. He has written a very well thought out piece which describes his own and his friends’ media habits. In no way do I want to criticise a teenager for being thoughtful, engaged and articulate.

But one has to put this research note into context: This is one teen describing his experience. It is not a reliable description of all teens’ attitudes and behaviours, yet both Morgan Stanley and the media seem to be treating it as if Robson has Spoken The One Great Truth. “Twitter is not for teens, Morgan Stanley told by 15-year-old expert” coos The Guardian. “Note by ‘teenage scribbler’ causes sensation” says the FT in astonishment.

Neither Morgan Stanley nor the media seem to be able to tell the difference between anecdote and data. This “research note” is more note than research, and it should not be taken to be representative of all teens. A teenager in a rural setting, or in an inner city estate, or one who feels socially excluded from web culture will have a very different experience than a teen who’s well-connected enough to get himself an internship at Morgan Stanley.

What is worrying about this is not Robson’s note: He’s simply doing what most teens (and most adults) do, which is to extrapolate from his own and his friends’ experience to form generalisations about the world around him. It’s a very human thing to do, but the important thing about businesses like Morgan Stanley, and the journalists who write about them, is that they are supposed to be able to tell the difference between data and generalisations. Yet they don’t seem able to sort the wheat from the chaff. It seems yet another symptom of the group-think in the media and financial sector that led to the Great Recession, rather than an indication that we have learned anything from it.
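To make the statistical point concrete, here is a small, purely illustrative simulation (all the numbers are hypothetical, invented for the sketch - nothing here comes from real survey data). Because media habits cluster socially, any one friendship group tends to be far more uniform than the population, so "asking my friends" can land wildly far from the national picture:

```python
import random

random.seed(1)

# Hypothetical assumption: roughly 30% of teens nationally use a given
# service, but usage clusters socially, so each friend group is mostly
# users or mostly non-users.
def make_friend_group(size=20):
    base = 0.9 if random.random() < 0.3 else 0.05
    return [random.random() < base for _ in range(size)]

# A "population" of 1,000 friend groups of 20 teens each.
population = [member for _ in range(1000) for member in make_friend_group()]
national_rate = sum(population) / len(population)

# One teen's sample: his own friend group.
one_group_rate = sum(make_friend_group()) / 20

print(f"national rate:    {national_rate:.2f}")
print(f"one group's rate: {one_group_rate:.2f}")
```

With clustering like this, the single-group estimate is almost always close to 0 or close to 1, and almost never close to the true rate - which is exactly why one articulate anecdote, however honest, is not data.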

Sarah Perez on ReadWriteWeb says:

Matthew Robson, a 15-year-old intern at analyst firm Morgan Stanley recently helped compile a report about teenage media habits. Overnight, his findings have become a sensation…which goes to show that people are either obsessed with what “the kids” are into or there’s a distinctive lack of research being done on this demographics’ media use. Robson’s report isn’t even based on any sort of statistical analysis, just good ol’ fashioned teenage honesty. And what was it that he said to cause all this attention? Only that teens aren’t into traditional media (think TV, radio, newspapers) and yet they’re eschewing some new media, too, including sites like Twitter.

Well, research has been done. danah boyd has done some excellent research into the use of the web by teens - it’s her speciality and she’s one of the foremost experts in this area. Her research would help Morgan Stanley understand the teen demographic much more clearly than any single anecdote, however well written, ever can. The fact that they haven’t ever had a clear insight into the teen demographic would seem to imply that their existing researchers and analysts aren’t doing their jobs properly. The information is out there, a lot of it is freely available, and all that remains is for someone to read it and write the report.

This story also feeds into the concept of the ‘digital native’ which, as I’ve blogged before, is a very poor way to talk about a very diverse section of the population. But because this report fits in with widely-held assumptions about teens and technology - not only does it describe ‘digital natives’, it’s written by one too - it’s immediately accepted without query or question. Morgan Stanley and the media both seem to be more interested in having their biases validated than they are in exploring the evidence to see where it leads them. Sadly, it seems that neither have been spending enough time watching CSI and drawing from it key lessons about assumptions, evidence and how to draw conclusions.

If I had been Matthew Robson’s boss at Morgan Stanley, on receiving his report I would have praised him on his good work and then asked him to look for evidence to either support or refute his points. That would have been an interesting exercise for Robson, and would have led to a research note that actually had some research in it. Instead, Morgan Stanley seem to have taken his work as gospel. I wonder why. Perhaps it was because they thought that, as a 15 year old, he’s privy to the inner workings of mysterious teen minds, a High Priest in the Digital Native Mythology?

If I relied on Morgan Stanley for anything, I’d be rather concerned right now regarding their lack of critical thinking.

Thursday, November 6th, 2008

The nature of work - visible, invisible, and that doesn’t look like work

Posted by Suw Charman-Anderson

As I mentioned in my last post, Proxies for productivity, and why no one trusts teleworkers, I think one of the big problems facing businesses right now is that they do not understand what work is, and what it isn’t. I outlined the four most common proxies for productivity that I’ve noticed at play in the businesses I have observed:

  • Number of emails received
  • Amount of time spent in meetings
  • Length of the work day
  • Distance travelled and jetlag suffered

Now this is not to say that email, meetings, long days and travel aren’t sometimes needed, or don’t form an important part of what work is in the knowledge economy. A small number of emails are important; meetings can occasionally be very productive, not just from the point of view of making decisions but also for the high-value relationship building that can only be done face-to-face; sometimes long days can be not just necessary but also productive; and every now and again you really do need to get on that plane.

I’m keen not to throw the baby out with the bath water, but to make the point that whilst sometimes these activities are genuinely important, mostly they are not. When they have become goals in and of themselves, instead of a means to achieve a goal, they have shifted from being useful tools to proxies for productivity.

Think about the playground marbles champion, who holds his position primarily because he’s managed to win, buy, steal or otherwise acquire a very large collection of marbles, rather than because he’s actually good at playing the game. People who believe that they are working hard because they get lots of email, do lots of meetings, always work long hours and travel a lot have done nothing more than fill a very large bag full of marbles.

So if all of this activity, this busy-ness, is only rarely actual work, what is work? For a couple of years now, I’ve been in the habit of thinking of work as falling into two categories, one easy to define, the other a lot less so.

Visible Work
This is all of the stuff that other people can see you doing. Obviously, the proxy activities fall into this category - if they weren’t very clearly visible to your peers and your managers, they would be no use as proxies. Document writing, coding, designing, phone calls, conferences, presentations… the list is almost infinitely extensible.

These are things that easily answer the question, “What is Alice doing?” They are the knowledge economy equivalents of manufacturing industry work: behaviours that result in something, whether tangible or digital, that is easily described.

Invisible Work
One of the big problems with working in a knowledge job is that much of your work is done in your head. There is no way to embody what goes on in your brain, no matter how important it is in helping you to attain your goals. Indeed, a lot of what knowledge workers do is very creative, and creativity needs to be fed. That means knowledge workers can often end up doing things that, to the uninitiated, look like anything except work. Talking to colleagues around the water cooler, gazing off into the middle distance, getting up from your desk to go sit somewhere quiet… thinking.

When I worked as a web designer for PwC, back before the Great Crash, the head of our studio and our lead designer both recognised the importance of invisible work (although I doubt they conceptualised it like that). We were encouraged to spend time fiddling about with new ideas, we were taken on days out to the Science Museum for inspiration, we could talk to each other and do whatever we needed in order to be creative.

But despite the fact that thinking is an essential part of knowledge work (it wouldn’t be knowledge work if it didn’t involve thinking, it’d just be… information work or data work) we give people very little time to pause, reflect, and consider their actions. It’s all go go go, all about the visible work. Because consideration looks far too much like inaction from the outside: the real work is going on inside your skull, and short of hooking everyone up to brain scanners, there’s no real external sign that anything at all is happening in there.

So the knowledge worker either has to find a way to feign work in order to get a moment to think, or has to do it on their own time, mulling things over on the commute to work or under the shower. The deep, intense conversations that spark a revelation have to happen at lunch, or down the pub, or not at all, because “chatting” is skiving. (Unless, of course, it’s scheduled in the diary in which case it could be a meeting… but then your brain falls into meeting mode and, after years and years of bad experiences in meeting rooms, your creativity slinks off to a corner and quietly dies.)

Now, after a couple of years of thinking about this and watching what goes on around me, I want to add a new category to the list:

Work That Doesn’t Look Like Work
The internet has had a very bad rap over the last ten years. One person I know tells the story of how he used to do research for his job using internet tools, primarily a browser and Skype, but started to notice a chill in the work atmosphere. When he asked a colleague what was going on, she replied “Well, we see you using a browser, and… well… we only use the internet for booking holidays and buying stuff on eBay, so we assume you’re doing the same thing.”

People - peers and managers alike - too often equate the browser with skiving, an accusation which has never been fair. When I was a music journalist back in the late 90s, I could not have done my job without using the internet for research. It was an invaluable tool then and it’s an even more invaluable tool now. I cannot imagine how I could do my job without having the internet to provide not just information, but inspiration. Indeed, I would not want a job that cut me off from the web. It would be like undergoing a lobotomy.

Of course, businesses have had intranets - accessible only through a browser - for years, but many of them were under-utilised and so awfully designed that they provided clear visual clues that, whatever it was that you were doing on that site, it wasn’t going to be fun. (And, therefore, had to be work… oh, what a sad indictment of our attitudes.)

But now it’s hard to tell at a glance whether the blog or wiki or social bookmarking site that someone is using is business-related or not. (Even the definition of “business-related” is getting very loose and floppy, with information and insight coming from all sorts of strange places.) And given that many businesses are now using these tools internally anyway, the browser is no longer the sad second cousin of “real” office tools, but rapidly becoming The Daddy.

The question is, will attitudes keep up? Truth is, they can’t afford not to.

If companies want to survive the current economic crisis, they are going to have to start getting a handle on what “work” really is, and in particular, address some of the old misconceptions that are still prevalent about the nature of work. They need to change the way that they judge how hard someone is working and re-evaluate their concepts of productivity. Because right now, they are engaging in strategies that are actively damaging their ability to function and, indeed, to survive in these straitened times.

Tuesday, November 4th, 2008

Proxies for productivity, and why no one trusts teleworkers

Posted by Suw Charman-Anderson

One of the biggest challenges facing business today is understanding the cultural changes that are required to truly put our manufacturing past behind us and face up to the new knowledge economy that we find ourselves in, like it or not. Over the years I’ve had a peek inside a wide variety of companies, everything from the five-person start-up to the multinational corporation, and it’s blindingly obvious that we haven’t yet moved on from Taylorism, where managers are focused on creating efficient processes and eradicating the opportunity for error. (The wrongness of a focus on process could be a whole series of posts on its own, but I’ll let it be for now.)

Most businesses are still treating work and workers as if they were producing physical objects like spanners, and the fact that they are not actually producing anything tangible causes a serious problem when attempting to understand, let alone measure, productivity. What does it mean to be productive in a knowledge economy job? From a company perspective, there’s always the profit margin to give an overview of how well the business is doing, but on an individual basis, that doesn’t help us at all. How can we tell whether Alice’s work contributed to the bottom line? How do we know if Bob is working to the best of his capabilities or slacking off? How do we compare Carol to her co-workers, when she does something completely different to Alice and Bob?

Nature abhors a vacuum, and in the absence of any genuine measures of productivity, we create our own ways of trying to understand how well we are doing compared to our colleagues. We are social creatures for whom status is important, so when we compare our own behaviours to those around us, we look for obvious measures of success and, thence, status. Those measures are like a sort of conceptual creole, the melding of the ideas of Taylorism and the realities of the modern job to create a set of proxies for productivity that are almost universally agreed upon, despite the fact that no one knows how or when that agreement occurred.

It’s important to note that all of these proxies come with a martyrdom complex - people boast about their sacrifices, expecting to elicit both sympathy and awe from colleagues. The bigger the sacrifice, the more sympathy and awe they get, and they get caught in a self-reinforcing cycle: the bigger martyr they are, the higher status they have, so the more motivation there is for sacrificing yet more.

The Email Proxy
More emails received indicates higher status.

This is probably one of the most common and damaging proxies for productivity and it almost seems to feed off a fame-like mechanism. We all know that being famous sucks, yet celebrity is still a big draw and many people who say they would eschew a chance to be famous would really, deep down, jump at the chance if it came along. We all know that getting hundreds of emails a day sucks, yet when our inbox gets that busy we feel proud of it, as if we are making a sacrifice for the sake of our increased status.

The Meeting Proxy
More time spent in meetings indicates higher status.

People simultaneously boast about their seven hour meeting marathon to colleagues, whilst also attempting to elicit sympathy about what a horrible day they’ve just had. Yet there is rarely any serious attempt to reduce the time spent in meetings or to avoid going to unnecessary ones. Indeed, in many cases, even people who are aware of how pointless some of their meetings are feel pressured to go anyway because they fear that their bosses will interpret their absence as “slacking off”, or because they don’t want to be excluded from any decisions that may get made in their absence. (They know that this is a proxy, but they also know that their bosses may not see it like that.)

The Time-At-Desk Proxy
A longer work day indicates higher status.

Not only do some people take a perverse pride in how long they end up staying at work, but they look down on those who do not spend (or seem to spend) an equal amount of time at their desk. Part-timers are viewed very negatively, and, indeed, the term ‘part-timer’ becomes an insult thrown at anyone who perhaps leaves early one day, or gets in late.

The Travel Proxy
More miles travelled to meetings, or more jetlag incurred, indicates higher status.

This proxy only really applies to a subsection of the workforce who have to travel for their job, but when it’s in place it’s just as powerful as any of the other proxies. Sometimes the travel is about commute time, or time spent on trains, but for some it’s really about how long you had to spend at the airport and how jetlagged you are. There’s a degree of machismo involved too, as people travel daft distances for short meetings through which they are barely awake due to the effects of exhaustion and jetlag. These experiences are perceived as demonstrating toughness and commitment, rather than the excesses they really are.

Firmly embedded
These proxies for productivity are so firmly embedded in business culture that I suspect they are used, whether consciously or not, as ways to gauge how well someone is doing and who deserves reward. Goals may be set at an annual review to help provide some sort of objective measure of how well you are doing, but can you really imagine someone who hardly ever used email, didn’t go to meetings, spent little time at their desk and rarely travelled, yet who met or exceeded all their goals, actually being popular with their boss? Anyone who behaved like that, no matter how effective they actually were, would be perceived as a slacker. And as we all know, perception is much more important than reality. That’s how real slackers get away with it - they look busy all the time, even though they achieve very little.

The irony about these proxies is that, of course, they are focused on the least productive ways you can spend your time. Email is a time sink, meetings are a waste, excessively long days decrease your productivity, and well, who really gets all that much done on a long journey? By allowing these proxies to stand, businesses are not only encouraging their staff to make false judgements upon their own and others’ productivity, they are also encouraging the very sort of behaviours that they should be working to minimise.

This is pretty bad news for social media, which disintermediates these proxies by reducing email, reducing the length and frequency of meetings, allowing people to be seen to be working even when not at their desk (and potentially reducing the amount of time they need to work to get the same amount of stuff done), and reducing the need to travel. Whilst these proxies are fixed firmly in people’s minds as a measure of their own effectiveness, then we’re going to have a very difficult time persuading people that it’s in their interests to adopt different and more effective ways of working.

A bigger problem, of course, is that most business leaders are in denial that there could be a problem with the culture of their organisation. One of the most dysfunctional companies I have ever come across, where decisions are arrived at seemingly at random, no one takes responsibility for those decisions, and the main mode of communication is shouting, also thinks it is the most egalitarian company out there. It’s not in the business leaders’ interests for them to examine or address the dysfunction of their business, because it’s that dysfunction that got them where they are, and keeps them there. If they suddenly had to become competent, well, that would be problematic.

Why no one trusts teleworkers
The great dream of teleworking hasn’t come true. We are not seeing companies rush to let their staff work from home, even though internet access and a phone are pretty much all that a lot of people need to do their job. I think the reason we haven’t seen a sea change in the way that we work is not the technology - I work from home most of the time, and even the basic tech I have on my Mac is enough for me to do my job perfectly well - it is that no one trusts the teleworker.

Three of the four proxies for productivity are removed in the case of the teleworker. The whole point of working from home is that you are not at your desk in the office, are not in meetings, and are not travelling. That leaves just email as a proxy, but for most managers that’s just not enough. They have never really sat down and thought about what their team actually does on a day to day basis, never considered how that might be measured, and what those measurements might mean (if anything). Instead, the forcible removal of three proxies simply leaves an uncomfortable hole in their subconscious reckoning of how hard someone is working, which allows in the fear that they are in fact not working at all, which then makes them reluctant to allow anyone that opportunity.

Social media can do a lot to help the teleworker connect with his or her colleagues, particularly applications that support declarative working (like declarative living, but, well, at work), helping make explicit the previously implicit acts of work that make up each working day. But again, the cultural barriers are high and it will take a determined and brave leader to change their business culture enough to allow teleworkers’ managers and co-workers to fully understand and trust them.

Monday, October 15th, 2007

It’s not just newspapers

Posted by Suw Charman-Anderson

Last week, Kevin wrote about Alan Mutter’s Brain Drain post on how the journalists who most get this new digital era are the people least likely to be able to effect change within their organisations, and how many of them are looking to get out of the media because they can’t see a future for themselves there. Many voices from the journalist blogging community chimed in, and Kevin does a good job of linking to some of the most prominent posts. But I have something really very, very important to say to everyone who reads Strange Attractor who isn’t in the newspaper business.

It’s not just newspapers losing their brightest talent.

I have a lot of conversations with a lot of different people from a lot of different places, and recently a theme has started to emerge. The people who most clearly understand the way that the internet and Web 2.0 is transforming business are leaving jobs that frustrate them with companies that don’t get it, and are either finding other jobs with companies that do get it or are cutting loose completely and going freelance. And I’m not alone in this observation - Dennis Howlett blogs about a conversation he had with a Barclaycard developer who was profoundly unhappy with his job because there was no opportunity to innovate:

I was struck by the profound sense of frustration experienced by this person. Geeks invent stuff. They solve problems. They love puzzles. Stifling the ability to engage in those activities is anathema. It’s like sucking out the oxygen they need with which to thrive. Any time organizations do that to anyone, productivity plummets.

It’s not just geeks, either. On more than one occasion I have been brought in to talk to a company by someone who sits in the room with me and nods vigorously (but often silently) as I speak. When they do talk, I find myself nodding vigorously as well, and it becomes clear that they are on the right track, that they understand social software and the changes currently being wrought. One day, I asked one of my contacts, “Why did you bring me in when you so obviously know what you’re talking about?” The response came, “Because they won’t listen to me - maybe they will listen to you.”

These people aren’t journalists or developers; this isn’t about a particular industry or job title. These are people who have a passion for the internet, who see how useful social tools can be, who just want to make small changes that might have a big impact, but they can’t, because management won’t let them. Whether that’s via direct commandments or through an anti-change, anti-innovation, anti-technology culture that’s been fostered by them doesn’t matter - the fact is that smart, innovative people aren’t being allowed to experiment, and they’re getting so frustrated by it that they are leaving to go elsewhere.

It’s not just newspapers that need to wake up to the fact that their middle managers and CXOs just might not have the right skillset and mindset to help them survive the digital era. As far as I can tell, that problem is rife in all industries. And any business that refuses to take notice of its own talent (or even the knowledge of digital experts - who, it has to be said, may turn out not to be white, male and middle-aged, and may even come from outside your sector) is going to find itself very much at the bottom of the heap as its brightest people go off to help more open and aware companies.

Wednesday, October 3rd, 2007

FOWA07b: Robin Christopherson

Posted by Suw Charman-Anderson

The art of attractive, yet usable (accessible) sites
Many sites are very pretty, but as soon as you do anything different, things go to pieces. Attractiveness shouldn’t be fragile; it needs to be robust, and you need to be sure that the site is nice under a wide range of conditions.

What has accessibility got to offer usability? The DDA (Disability Discrimination Act) is a law that covers accessibility, and over 90% of sites do not comply. Sites that meet DDA standards are also easier to use for able-bodied people, not just disabled people.

Legal and General re-launched their site, spent £200k on accessibility, and found a huge upsurge in mainstream users as well: 30k extra hits at first, largely from the increased platform compatibility. People wanted the site to work on their platform, regardless.

A speciality browser called HomePage Reader (free) renders the page in a text-only view and has a screen reader that reads the text aloud. He goes to Amazon, and the reader reads out all the text, which turns out to be all the text for the images. There is no ‘skip to content’ link, so he is forced to listen to the [image ...] tags, because none of the images are labelled.
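A check like the one HomePage Reader exposes can even be automated. Purely as an illustration, here’s a tiny Ruby sketch (a naive regexp scan, not a real HTML parser) that flags image tags a screen reader would have to read out as unlabelled:

```ruby
# Naive check for <img> tags without alt text, in the spirit of the
# screen reader demo: an unlabelled image forces the listener to sit
# through "[image ...]" instead of a description.
def unlabelled_images(html)
  html.scan(/<img\b[^>]*>/i).reject { |tag| tag =~ /\balt\s*=/i }
end

p unlabelled_images('<img src="logo.png"><img src="cart.gif" alt="Your basket">')
# => ["<img src=\"logo.png\">"]
```

A regexp will miss edge cases a real parser would catch, but it’s enough to surface the Amazon-style problem described above.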

Google Mail is an Ajax application, but unless you are using a screen reader you won’t know that there’s some hidden text at the top of the page that suggests screen reader users switch to the basic HTML version. Heavy use of JavaScript in a web app can be got round by providing a basic version too. Google felt they wanted to go the extra mile to provide a completely JS-free version, which you don’t have to do, but you do have to make sure that your JS doesn’t mess with screen readers.

Google Maps: there is a text-only version which isn’t signposted - the URL isn’t publicised - but it can do everything that Google Maps can do, so you can look up florists in Melbourne, or whatever; all that functionality is there. They have implemented this parallel alternative, and there are so many platforms that could benefit from this text-only version.

Google Accounts, if you want to set up a new account, uses a ‘captcha’ image, which obviously doesn’t have alt text. If you are trying to sign up to Yahoo, forget it. But Google offers two things: an audio link, though that’s really hard to hear - he has not been able to understand the string, which is a bit self-defeating - or a link to contact Google, and they will contact you personally by email to help you set up your account. It requires manpower to provide that service, but they obviously take accessibility extremely seriously, and it shows that they are going to be a force to be reckoned with - little things like this make it the default choice for people with a disability.

If you’ve got someone with a disability who tests your site, and the site is optimised for them, it makes it much easier for mainstream users too.

Everyone knows about captioning, and multimedia is more important in Web 2.0, but with YouTube you can’t add captions; you have to ‘hard burn’ them into the video. Go home tonight and watch TV with your eyes closed - you can’t follow the action. Audio description actually tells you what the action is in a video, and it makes video followable for the vision impaired.

The vision impaired are the hardest category to cater for. A window magnifier makes the web page much, much bigger, e.g. 5x. He demos inconsistent navigation, pop-ups, etc., which all make it hard for a magnification user to find their way around. Pictures of words get hard to read - they become pixelated. Another reason for providing a basic text version.

The General Motors website looks lovely, but you might want to increase the text size, which is hard-coded, meaning people can’t make it bigger so that they can read it. You have to set the browser prefs to ignore the specified text size, which many people don’t know you can do, and even then it breaks the navigation with overlapping links, etc.

Vodafone did a better job, marking up all the headings and navigation, but whilst they have allowed people to change the text size, doing so corrupts the page, with text showing over other text. Colour presents the same problem: if someone asks their browser to ignore specified text colours, content can totally disappear.

This isn’t just about websites, but also applications on the desktop. He shows a mystery-meat navigation that requires hovering over images to bring up a button a few pixels wide. It’s not available to people who can’t use a mouse, as there are no keyboard shortcuts.

Google Search: when you search, you get next-page links - 1 2 3 4 5 … - and that makes it possible for people with co-ordination issues or problems using a mouse to click the links.

He demos the Vatican website, with its circular navigation, tabbing from link to link with the default IE dotted-line highlight; the tabbing order is the most bizarre in the world. You need to make sure that sites work and are usable from the keyboard.

Flash breaks the whole screen reader experience.

Voice recognition is there out of the box in Vista, but Flash totally ‘de-supports’ voice recognition, so it can’t read the screen and you can’t use the voice recognition to choose a link and click it.

Hearing impairment and cognitive difficulties, e.g. dyslexia, learning disabilities, language difficulties (e.g. a second language). With UGC there’s a lot of lack of awareness out there regarding the accessibility of the content people create. He shows a home page on MySpace with some prime examples of pages that are totally overloaded with graphics and totally inaccessible. It falls on the heads of the MySpaces and Facebooks of the world to flag those requirements well, in the registration process or the toolkit for creating the pages.

Finally, there was a problem with the Olympic logo promo video that caused epileptic fits. There are tools that assess how likely a video is to cause such problems.

AbilityNet.org.uk

Monday, September 24th, 2007

Abusing goodwill

Posted by Suw Charman-Anderson

I like to think that the world is based on goodwill. People are, generally speaking, nice and, by default, they will respect and help others. Certainly humans are fundamentally and inescapably social creatures that need each other on a minute-by-minute and day-to-day basis, and I think that being nice is one of the attributes that fuels the reciprocation that makes helping someone else ultimately worth it for us ourselves.

I also think that the social web is an expression of the niceness that lubricates society. All the mores that have built up around blogging and wikis and sharing and Creative Commons are based on being nice: if you quote someone’s blog, it’s being nice to credit them; Wikipedia encourages everyone to be nice to newbies; sharing anything with strangers is an act of niceness in itself; and Creative Commons licences are predicated on the idea that people will be nice and respect them.

Whilst niceness isn’t universal - there are people who aren’t nice - it is a desirable attribute, so much so that niceness is taught and enforced from birth. I doubt there’s anyone reading this who wasn’t told as a child to “be nice” or to “play nicely”. Nice is good. We need nice.

This might explain why I get so cross when I come across examples of people, or especially businesses, not playing nice. But thanks to the internet, we now get to call out companies who, whilst sticking to the letter of the law (or Creative Commons licence), are flagrantly abusing its spirit.

First up, Virgin Mobile Australia. They found a photo of two American girls on Flickr, and decided to use part of it on billboard and online ads, with the taglines “Dump your pen friend” and “Free text virgin to virgin”. Alison Chang was the girl featured, and her family is now suing, saying that the ad “caused their teenage daughter grief and humiliation”, and listing both Virgin Mobile and Creative Commons as defendants.

The photo in question was shared on Flickr using an attribution licence, meaning that technically, it could be used by any company for commercial purposes without requiring permission from the photographer (although the licence has now been changed to “all rights reserved”). But there are legal issues around this use, because, despite the liberal reuse licence that was used, Australia requires model release forms to be signed before an image can be used in an advert. The original photo is still on Flickr, as is a photo of the billboard ad.

But what really stings about this is that it’s just not nice. Whether or not the CC licence allowed for commercial reuse, what Virgin Mobile and their PR companies - Host and The Glue Society, according to the blog Duncan’s Print - did was really unpleasant. There was absolutely no reason why they couldn’t have used stock photos for any ads that needed to feature people, but instead they whipped free photos off Flickr without giving a moment’s thought to the impact it might have. And Virgin Mobile Pty Ltd.’s response is absolutely disgraceful. The AP quotes them as saying:

Virgin Mobile Pty Ltd., the Australian company, released a statement saying the use of the photo is lawful and fits with Virgin’s image.

“The images have been featured within the positive spirit of the Creative Commons Agreement, a legal framework voluntarily chosen by the photographers,” the statement said. “It allows for their photographs to be used for a variety of purposes, including commercial activities.”

The “positive spirit” of Creative Commons is about constructive reuse, and this cocky attitude that they can take someone’s image and insult them publicly in the name of advertising is repulsive. Virgin and its PR company might not have broken the letter of copyright law, but they certainly showed no thought or consideration for Alison Chang. This sort of behaviour is just not nice, and Virgin should be castigated for it.

Now, on to Jo Jo, whose story is much more straightforward. Jo Jo writes about and photographs food on her blog Eat2Love; the trouble is, journalists keep lifting her ideas, both in terms of the things that she writes about and the way that she styles the food she photographs. Whilst this has, according to her, been going on since January, the straw that broke the camel’s back was seeing photographs that looked very much like hers on the cover of Gourmet magazine. And it’s not just Gourmet. In an email to me a couple of days ago, Jo Jo names another two publications and talks of a “major” website that poached her work.

Again, the journalists, photographers and editors who are lifting ideas from Jo Jo aren’t breaking the law. You cannot copyright ideas, and I think that’s a damn good thing, otherwise nothing would ever progress, but regularly poaching someone’s ideas without ever acknowledging how heavily your work is influenced by them, or without building something original on top of their idea, isn’t a very nice thing to do. Journalists and photographers get paid for their creativity, and nicking someone else’s is a cheap shot.

I know people who would probably respond to this by saying “Well, tough - that’s how it goes when you put your stuff online for free, and you just have to suck it up,” but the sad thing is that it forces a binary decision to be made. Either Jo Jo puts up with being constantly ripped off, or she stops blogging. She decided to at the very least cut back on blogging - she’s written just two posts in the last two months, and has removed much of her archive:

90 % of the articles on this blog have been removed from view. what you are viewing are my write-ups of a few food events, and some restaurants.

I think that’s a real shame.

I have real sympathy for Jo Jo. I remember when I was a budding music journalist trying to get a commission from a very high-profile glossy music magazine. I was asked to fax them five different feature ideas, which I did. I was fobbed off by the editor with some feeble excuse as to why my ideas were no good, only to see a few months later one of them written up by someone else. Could I prove that it was my idea? No, I couldn’t, but it was distinctive enough that it pretty clearly was my idea. And that was really galling - I felt like I’d been played for a fool, and it was this sort of shitty behaviour that, along with the shitty pay, drove me away from music journalism.

Now, I think there’s a different thing going on when people release under Creative Commons, and make the choice to let others reuse their work, or when you can see a professional benefit from seeing your stuff redistributed by other people. But one of the main tenets of Creative Commons is attribution, saying where you got stuff from. When someone poaches ideas and doesn’t admit that they weren’t being original, that’s unacceptable.

The flip side is that it’s easier and easier to find out who is ripping whom off, and who’s not playing nice. Companies are going to have to learn that it’s just not worth their while being the playground shark that tricks the other kids out of their pocket money, because they are going to get found out. Even monkeys have a sense of what is fair play, and in the blogosphere, this innate sense is getting honed to a sharp point.

So my advice to any business intending to take advantage of all that lovely free content out there? Play nice.

Thursday, July 26th, 2007

New health fears over big surge in misleading and irresponsible science reporting

Posted by Suw Charman-Anderson

As soon as I saw the news that Dr Andrew Wakefield, the doctor who first alleged that there was a link between the MMR (measles, mumps and rubella) vaccine and autism, was to be brought before the General Medical Council on charges of professional misconduct, I knew that there’d be a media feeding frenzy. Despite lots of evidence that the MMR vaccine is safe and a distinct lack of evidence that there is any link between MMR and autism, journalists from every corner of the media insist on writing stories that lead the public to believe quite the opposite.

As the misconduct story broke, I saw stories on both ITV’s morning show GMTV and on the BBC, which managed to paint Wakefield as some sort of misunderstood hero and imply both that the link between MMR and autism was real, and that the ‘establishment’ was working to deliberately mislead the public. Both broadcasters used the same ‘reporting’ tactic - to interview the parents of autistic children, (along with the autistic children themselves and their non-autistic older brother, on GMTV), giving them the opportunity to promulgate their beliefs for five minutes, whilst a GP was given two or three sentences in which to respond. The last word, on GMTV at least, was given to the parents.

The pieces were incredibly biased, pitting beliefs against evidence, with the presenter clearly coming down on the side of the parents and, to all intents and purposes, dismissing the evidence and views of the medical experts out of hand.

This, by itself, is appalling. Beliefs are not evidence. Nor is suffering. No matter how much sympathy I have for children and adults with autism, symptoms by themselves are not evidence of the cause of those symptoms. And the fact that people are suffering these symptoms should not be interpreted as proof that studies finding no link between MMR and autism are ipso facto wrong. Believing things does not make them true - science is not some sort of Secret where the power of the mind can change reality.

What is true is that the media have exploited the beliefs of those who are suffering, and in doing so have denigrated the work of many respectable, honourable and diligent scientists in order to create outrage, because outrage sells. They have portrayed the flawed work of a minority of doctors - now charged with acting unethically and dishonestly - as David to the rest of the medical world’s Goliath, purely so that they can profit from covering the manufactured conflict.

Things got even worse on the 8th July when The Observer’s Denis Campbell wrote an article entitled “New health fears over big surge in autism”. The original article has been removed from The Observer website (i.e. Guardian Unlimited), so if you click that link all you’ll get is a 404 page, but the whole thing has been posted in the comments of Ben Goldacre’s blog, Bad Science. The chances are that the article has been pulled for legal reasons, but I’m getting ahead of myself.

Read the rest of this entry »

Wednesday, November 8th, 2006

The democratisation of everything and the curators who will save our collective ass

Posted by Suw Charman-Anderson

Over the last few years we’ve seen old barriers to creativity coming down, one after the other. New technologies and services make it trivial to publish text, whether by blog or by print-on-demand. Digital photography has democratised a previously expensive hobby. And we’re seeing the barriers to movie-making crumble, with affordable high-quality cameras and video hosting provided by YouTube or Google Video and their ilk.

Music making has long been easy for anyone to engage in, but technology has made high-quality recording possible without specialised equipment, and the internet has revolutionised distribution, drastically disintermediating the music industry.

Even sculpture is going to succumb, as Second Life residents can create complex avatars and then have them 3D printed into a physical item. It’s early days now, but it’s not going to be long before you can create any shape you like and have it printed, allowing anyone to become a sculptor without ever having to deal with physical materials.

What’s left? Software maybe? Or maybe not.

If you read my personal blog, Chocolate and Vodka, you’ll know that I’m learning Ruby on Rails. Ruby is a programming language, and Rails is a programming framework. The way it works is that you set up your database, and then you ask Rails to, say, create your input form, and it writes the Ruby and the HTML you need in order to create a web page that allows you to input data into your database. I have very little ability when it comes to programming, but I am learning Ruby on Rails and I see no reason why I can’t start creating my own web-based applications within the next few months.
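To give a flavour of what Rails automates (without using Rails itself), here’s a sketch in plain Ruby using the standard ERB templating library: describe a table’s columns and have the input form’s HTML generated for you. The table and column names are invented for the example:

```ruby
require "erb"

# A toy version of scaffolding: generate an HTML input form from a
# list of column names, rather than writing the markup by hand.
FORM_TEMPLATE = ERB.new(<<~HTML)
  <form action="/<%= table %>" method="post">
  <% columns.each do |col| %>  <label><%= col %>: <input name="<%= col %>"></label>
  <% end %></form>
HTML

def scaffold_form(table, columns)
  FORM_TEMPLATE.result_with_hash(table: table, columns: columns)
end

puts scaffold_form("posts", %w[title body author])
```

Rails does vastly more, of course - models, validation, routing - but the principle is the same: the framework writes the repetitive Ruby and HTML so you don’t have to.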

Like 3D printing, this is just the beginning. Ning and Coghead are attempts to make web app development easier, but as they, and RoR, evolve we’re going to see people with no programming skills able to make their own web apps without ever having to learn a line of code.

The future is going to contain lots of small, agile development projects, and I’m not the only one who thinks this. Evan Williams recently wrote about what he calls the Obvious model for building and running web products:

The Obvious model goes something like this:

* Build things cheaply and rapidly by keeping teams small and self-organized.

* Leverage technology, know-how, and infrastructure across products (but brand them separately, so they’re focused and easy to understand)

* Use the aggregate attention and user base of the network to gain traction for new services faster than they could gain awareness independently

evhead: The Birth of Obvious Corp.

Hosting is affordable; Google’s AdSense makes raising revenue from ads simple to set up (which doesn’t necessarily mean you’ll get much revenue, mind); and blogs make it easier to promote your app. Just like every other area of human creativity, the barriers are coming down.

I was at a ‘future of…’ session the other week, and one of the trends I suggested was important was ‘the ubiquity of everything’. My fellow brainstormers didn’t seem to agree with the word ‘everything’, but I think we are moving towards a world where the only things that are rare are certain physical resources, and attention.

We already have more movies available than any one person can watch; more videos on YouTube; more blogs; more podcasts; more internet radio; more books; more software; more web apps; more games; more everything. It’s not like we’re starting from a point of scarcity here. And the flood of stuff is going to turn into a rampaging torrent as more people get online and more people get excited by their ability to participate and create.

In the past, the media acted as gatekeepers. They were the ones that went to the movie previews and told us which ones were good or crap. They were the ones who went to all the gigs and told us which bands were cool or rubbish. They were the ones who got the advance copy of the game and told us whether it was playable or tedious. They were the arbiters of taste, the people in the know, the ones with the connections needed to get at culture before us plebs got at it.

But we don’t need gatekeepers anymore. We don’t need people who stand between us and our stuff, deciding what to tell us about and what to ignore. We don’t need arbiters of taste. There are so many blogs out there reviewing software and web apps and films and books and every other sort of creativity that we don’t need to rely on the media’s old gatekeepers telling us what we should like.

We do, however, still need help. There’s just too much stuff around for us to know what’s out there, to keep up with what’s good, what works for us, what is worth investigation. What we need are curators. And we need them badly.

We need people who can gather together the things that are of interest to us, things that fit with our tastes or challenge us in interesting ways, things that enrich our lives and help us enjoy our time rather than waste it on searching.

Curators already exist. Some are people: bloggers who sift through tonnes of stuff in order to highlight what they like, and who, if you have the same taste as them, can be invaluable for discovering new things to like. Some are aggregators: sites that gather lots of little bits of stuff, present them in aggregation, and help us find the bits that the majority find to be good. Some are algorithms: recommendation systems and search.
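As a sketch of the algorithmic kind of curator, here’s a minimal taste-matching recommender in Ruby; the users and items are entirely made up, and real systems are far more sophisticated:

```ruby
# Recommend items liked by people whose tastes overlap with yours,
# weighting each suggestion by how much taste you share.
LIKES = {
  "ana"  => %w[blogA blogB blogC],
  "ben"  => %w[blogB blogC blogD],
  "cara" => %w[blogX blogY],
}

def recommend(user, likes)
  mine = likes.fetch(user)
  scores = Hash.new(0)
  likes.each do |other, theirs|
    next if other == user
    overlap = (mine & theirs).size         # shared taste with this person
    (theirs - mine).each { |item| scores[item] += overlap }
  end
  scores.sort_by { |_, score| -score }.map(&:first)
end

p recommend("ana", LIKES).first  # => "blogD" (ben shares two of ana's likes)
```

Even this toy shows why taste-matching beats raw popularity: cara’s likes score zero for ana because the two share no taste at all.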

But curation of the web has barely started. Much of what you could call curation that exists today is flawed: too many noisy opinions and not enough capacity to understand what I as an individual want; recommendation algorithms that produce seemingly random results; and the problem of ‘popularity begetting popularity’.

The great challenge for us, and the web, going forward is no longer breaking down the barriers to creation, it’s finding our way through the huge amounts of creativity that’s resulted.

Sunday, June 4th, 2006

Comment is F**ked

Posted by Kevin Anderson

First off, I want to say that I really admire the ambition of the Guardian Unlimited’s Comment is Free. It is one of the boldest statements made by any media company that participation needs to be central to a radical revamp of traditional content strategies.

As Steve Yelvington said this week:

Editors, please listen. If you’re not rethinking your entire content strategy around participative principles, you’re placing your future at risk.

The Guardian seems to understand this need for participation to be integrated with its traditional content, but as with many media companies: “The future is here. It’s just not widely distributed yet.” It is, therefore, not hugely surprising to find that Comment is Free is having a few teething troubles. Ben Hammersley, European alpha geek and one of the people behind CiF, knew there would be risks:

Perhaps the most prominent liberal newspaper in the anglophone world, opening a weblog for comment and opinion, with free and open user commenting is, to put it mildly, asking for trouble. … This means that we have to employ a whole combination of technological and social countermeasures to make sure that the handful of trolls do not, as they say, ruin it for the rest of us. Frankly, it gives me the fear.

Ben was right to be concerned. Honestly, I wish there were a clearer headed assessment of the risks involved with blogging by media companies. Don’t get me wrong. I think that media companies should blog, but the risks aren’t as simple as they may appear and something on the scale of CiF is of course going to have problems. The Guardian appear to have focused mainly on the risks posed by commenters and have put a lot of energy into figuring out how they can have open comments without falling foul of UK libel law.

But people are people, and you are bound to get abusive, rude or irrelevant comments. Any publicly commentable website will reflect the cross-section of society that reads it, so it’s inevitable that some comments will not be as civil and insightful as we would prefer. Trolls happen.

Just this week, Engadget had to temporarily shut off its comments “because of the unacceptable level of noise / spam / junk / flaming / rudeness going on throughout our boards”.

Where the Guardian has fallen over is in their assessment of the risks posed by their choice of columnists to blog on CiF. Rather than thinking about who would make a really good blogger, they seem to have made the same mistake as the rest of the big media who have tried their hand at blogging: they’ve given their biggest names blogs, despite the fact that these people have no idea how to blog. Now a bit of a tiff has kicked off between the Guardian’s stable of columnists, the commenters on Comment is Free and the bloggers there. (Thanks to my colleague Nick Reynolds at the BBC, who blogged about this internally and brought it to my attention.)

Catherine Bennett writes a column so full of uninformed generalisations about blogging in the UK, specifically political blogging, as to completely lack credibility. She seems to be trying to discredit the Euston Manifesto, a net-born political movement in the UK, by painting it as the creation of a sexually obsessed, semi-literate male-dominated blogging clique. I’ll leave it to you to follow the link to the Manifesto and draw your own conclusions.

Another Guardian columnist, Jackie Ashley, defends professional columnists, and says: “To those of you who think you know more than I do, I’m eager to hear the arguments: just don’t call me a fucking stupid cow.” Polly Toynbee asks commenters: “Who are you all? Why don’t you stop hiding behind your pseudonyms and tell us about yourselves?”

Ms Toynbee, why don’t you step out from behind your byline and tell us a little about yourself instead of belittling us? It’s usually worked for me when trying to dampen an online flame war.

I’m sitting here reading her column, and I really don’t understand how she expected this to put out the fires. She asks for civility and for people to tell us who they are, but then she says of one of her anonymous detractors:

What do you do all day, MrPikeBishop, that you have time to spend your life on this site? I suppose the answer may be that you are a paraplegic typing with one toe and then I shall feel guilty at picking you out as one particular persecutor.

What do you expect when you respond to ad hominem attacks with patronising ad hominem attacks? Do you really see this as a solution? Are you treating your audience with the kind of respect that you, for some reason, think you deserve by default?

Ms Toynbee professes to answer her many e-mails, but I do get the sense that the Guardian’s columnists are simply not used to this kind of medium. They are not used to getting feedback in public, where they can’t just hit ‘delete’ to get rid of a pesky critic.

Suw - who, I should inform Ms Bennett, is female and blogs, thank you very much - likened such old-school thinking to this:

It’s like them walking into a pub, making their pronouncements and then walking out. Later, they are shocked to find out that everyone is calling them a wanker.

An interesting comment on CiF from altrui May 18, 2006 12:04 (I can’t link directly to the comment):

One observation - those who respond to commenters tend not to be abused so much. There is a certain accountability required among political commentators, just as there is for politicians. Until now, opinion formers have never really had to justify themselves. I can think of many of the commentariat who write provocative and incendiary pieces which cause no end of trouble, yet they carry on stoking up argument and division, without censure or even a requirement to explain themselves.

Two issues here: Columnists are not used to engaging in conversation with their readers, and the readers have had years to build up contempt for specific writers and are now being given the opportunity to revile them in public. A lethal combination of arrogance and pent-up frustration - no wonder CiF has soured. The question is, can the Guardian columnists learn from their mistakes and pull it back from the brink?

A few suggestions. Don’t treat your audience as the enemy. If you’re going to talk down to your audience, they are going to shout back. And quite honestly, I would say to any media organisation that your best columnists and commentators don’t necessarily make the best bloggers. Most media organisations think blogging is simply writing snarky columns. Wrong, wrong and wrong.

It’s a distributed conversation. Ms Ashley says: “As with child bullies, I wonder if these anonymous commenters and correspondents would really be quite so “brave” if they were having a face to face conversation.” You’re right, and I am in no way defending some of the toxic comments that you’re receiving. But step back. Read your column as if it were one side of a conversation and think how you would respond.

Many columnists seem to use the British public school debating trick that is really a form of elitist trash-talking: belittle your opponents as much as possible, and most will lose their heads, and therefore the argument. But, again, step back. Would you ever address someone face-to-face in the patronising manner of your columns and honestly expect anything approaching a civil response? It seems that your debating strategy has worked all too well, and your audience is so angry that they are responding merely with profanity and vitriol.

Again, having said all of that, I’m glad that the Guardian aren’t letting growing pains stop them. They are choosing one of their best CiF commenters to become a CiF blogger. Bravo.