
About The Authors

Suw Charman-Anderson

Suw Charman-Anderson is a social software consultant and writer who specialises in the use of blogs and wikis behind the firewall. With a background in journalism, publishing and web design, Suw is now one of the UK’s best known bloggers, frequently speaking at conferences and seminars.

Her personal blog is Chocolate and Vodka, and yes, she’s married to Kevin.


Kevin Anderson

Kevin Anderson is a freelance journalist and digital strategist with more than a decade of experience with the BBC and the Guardian. He has been a digital journalist since 1996 with experience in radio, television, print and the web. As a journalist, he uses blogs, social networks, Web 2.0 tools and mobile technology to break news, to engage with audiences and tell the story behind the headlines in multiple media and on multiple platforms.

From 2009-2010, he was the digital research editor at The Guardian where he focused on evaluating and adapting digital innovations to support The Guardian’s world-class journalism. He joined The Guardian in September 2006 as their first blogs editor after 8 years with the BBC working across the web, television and radio. He joined the BBC in 1998 to become their first online journalist outside of the UK, working as the Washington correspondent for BBCNews.com.

And, yes, he’s married to Suw.






All content © Kevin Anderson and/or Suw Charman



Thursday, May 18th, 2006

Xtech 2006: Paul Hammond - An open (data) can of worms

Posted by Suw Charman-Anderson

Used to work for the BBC, but left three weeks ago, so can’t talk too much about them. Started working for Yahoo! two weeks ago, lots of APIs at the Developer Network. But can’t really talk about that because he’s only been there two weeks.

The ideas he wants to talk about draw on his personal experience, and on experiences of friends which they’ve told him in confidence, so he can’t talk about that either. So this talk will not be as detailed as he would have liked.

Open data. BBC and Yahoo! both understand the benefits of open data. Both have made statements about the importance of open data. Both aim to make as much data available as possible. And there are restrictions on the use of those data.

People know that BBC and Yahoo! are opening up their data, because it’s still relatively rare. So when a new company does it, everyone gets excited. So wanted to see how much data there really is.

List of open APIs at www.programmableweb.com/apilist, and it’s a fairly good list, but missing a few bits and pieces. It had 201 APIs listed, all on one page. One quarter of the APIs listed came from seven companies:

Yahoo!

Google

Amazon

MS

Ebay

AOL

Plus one I missed.

Most of the companies are new; only 14 APIs come from companies more than 20 years old. The big old companies are big, and they’ve collected a lot of useful data that we could do interesting things with, but it’s not available.

So everyone in our tech bubble thinks open data is a good idea, but hardly anyone is doing it. So if open data is such a good idea, why isn’t there more of it? (He doesn’t care about the format of the data.)

Haven’t mentioned RSS/Atom. There are millions of RSS feeds, but these highlight the problems even more. You can now get RSS feeds for almost anything you want, but try getting in depth sports statistics, or updated stock market data, or flight times. You can’t get it. RSS is intended to be read in an aggregator, and most of it can’t be reused or republished.

So you can get any data you want from the net, so long as it’s the last 10 items on an RSS feed and you don’t want to do anything with it.
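That limitation is easy to see in code: a feed consumer only ever sees the window of items the publisher chose to include. A minimal sketch with an invented two-item feed (not a real endpoint):

```python
import xml.etree.ElementTree as ET

# A tiny, invented RSS 2.0 feed. Like most real feeds, it carries only
# the most recent items, not the archive, and says nothing about reuse.
RSS = """<rss version="2.0"><channel>
<title>Example feed</title>
<item><title>Item 10</title></item>
<item><title>Item 9</title></item>
</channel></rss>"""

root = ET.fromstring(RSS)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # only the window the publisher exposed
```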

Why are people happy to put some data out, but not others? Do the tech and standards need to be better? They are not perfect, but they never are. Simple things like character encoding are very easy to get wrong. Definitions are difficult.
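The character-encoding point is easy to demonstrate: decode UTF-8 bytes with the wrong codec and you get silent mojibake rather than an error.

```python
text = "café"
raw = text.encode("utf-8")  # b'caf\xc3\xa9'

# Decoding those UTF-8 bytes as Latin-1 doesn't fail; it silently garbles.
wrong = raw.decode("latin-1")
print(wrong)  # cafÃ©
```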

But they are good enough. Standards have been developed because there’s a real need to use this stuff behind the firewall. RSS is popular, and most of it is not perfect, but it’s good enough.

So if it’s not the tech, it must be something else. But there’s a simple reason. Organisations don’t do anything unless they think it is in their best interests. A company won’t do anything unless it makes money, so maybe companies don’t think it’s worthwhile. That means either:

They’re right.

or

They’re wrong.

Either could be correct. But more important is to understand their reasons.

Most companies don’t know what an API is. If they don’t understand the concept of releasing their data online, then standards won’t matter. Explaining the concept of an API is hard when you are talking to people who don’t know how computers work.

People are starting to learn about RSS. They understand that if they use RSS they don’t need to visit the site. But to use it you do need to know a little bit about it. However, it fits into an existing business model - it drives interest and visitors to their site. It’s in a positive feedback loop: the more RSS there is, the more you see it, and the more likely people are to use it.

So assuming the company knows what an API is…

Most companies make money from their data. So they will say ‘why give it away?’. For some you can explain why it’s good - for a public broadcaster you can say ‘we’ve paid for it already’. For some companies there are reasons - improves branding, etc. - but it’s a risk.

For most companies, they want competitive advantage. So if a competitor has opened up then you have to open up to keep up.

If you sell data and then you start giving it away, it reduces the perceived value of the data. If you sell it for tens of thousands of pounds, then why are you giving it away? It gets into a downward spiral as to what that data is worth.

Opening up data is risky - risk losing money that you’re making. Could argue that they are wrong, but not sure that they are.

Many companies are not allowed to open up, even if they want to.

Lawyers say no. Most companies don’t have complete rights over the data they use. So stock prices on the evening news don’t come from the broadcaster; they’re bought in. Google don’t create their own map data, they buy it from someone like Navteq. It’s cheaper that way: the data provider has economies of scale, and it’s a waste of time to do it yourself. Some companies also act as middlemen between groups, e.g. travel agents sitting between ticket bookings, Sabre and the airlines. Companies outsource things. Then there are exclusivity issues.

So even if they wanted to, some companies are contractually prohibited from sharing their data.

Look at Google Map mash-ups. Google get their map data from NavTeq, but the data used in the Google API is from Tele Atlas. Have to be determined to do this. Might also cost you more money.

Finally, the general public wouldn’t always like it. Personal data, for example.

It’s nice to have. But the benefits are second order. So people label it as low priority.

Once you have an API it will be missing features.

So what should we do?

Not sending emails demanding an API. That just makes you look like a moron.

But… what you can do

1. Be aware of the problems

2. Demonstrate usefulness, screen scrape if you need to, but don’t get yourself cease-and-desisted

3. Don’t assume it’s a technology problem

4. Target the right people, find someone on the inside who can help you

5. Talk about benefits to the provider, not the consumer. If you talk about the benefits to you, they’ll see you just as someone who wants something for free.

6. Have patience. It is getting better every day, and it takes time for business to come round.
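Point 2 can be sketched in a few lines: when there is no API, data can sometimes be recovered from the HTML itself. The page below is invented; a real scraper would also respect robots.txt and the site’s terms, which is the cease-and-desist point above.

```python
from html.parser import HTMLParser

# An invented results page with no API behind it.
PAGE = '<ul><li class="score">3-1</li><li class="score">2-2</li></ul>'

class ScoreScraper(HTMLParser):
    """Collect the text of every <li class="score"> element."""
    def __init__(self):
        super().__init__()
        self.in_score = False
        self.scores = []

    def handle_starttag(self, tag, attrs):
        self.in_score = tag == "li" and ("class", "score") in attrs

    def handle_data(self, data):
        if self.in_score:
            self.scores.append(data)
            self.in_score = False

s = ScoreScraper()
s.feed(PAGE)
print(s.scores)  # the data the site never exposed as an API
```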

Wednesday, May 17th, 2006

Xtech 2006: Steven Pemberton - The power of declarative thinking

Posted by Suw Charman-Anderson

Sapir-Whorf Hypothesis. Connection between thought and language: if you haven’t got a word for it, you can’t think it. If you don’t perceive it as a concept, you won’t invent a word for it. For example: Dutch ‘gezellig’ [or Welsh 'hiraeth'].

The Deeper Meaning of Liff: A dictionary of things there aren’t any words for yet but there ought to be.

Example, Peoria (n.): the fear of peeling too few potatoes.

Web examples, AJAX, blog, microformats, Web 2.0. These are words that let us talk about things, they create the concept for us so we can talk about them, even though the thing existed before. They also signal the success of work that has gone on in the past.

There’s little in AJAX that wasn’t there from the start. Blogs have really been around since 95.

What needs a name? Think about concepts that need names (which the Sapir-Whorf Hypothesis doesn’t allow us to do).

E.g. the sort of website like CSS Zen Garden, where the HTML has been cleanly separated from the CSS. Another example is using SVG to render data.

Other things that need to be Whorfed in the future:

- layering semantics over viewable content, like microformats and RDF/A, making the semantic web more palatable for the web author.

- webapps using declarative markup.

Moore’s Law and an exponential world. Computers are very powerful now. His new computer is a dual-core, which means his computer is twice as idle as it was before. Why aren’t we making the best use of this power?

A declarative approach puts the work in the computer, not on the human’s shoulders.

Software versions are not so much of an issue these days, but devices are. Lots and lots of devices. Also diversity of users. We are all visually impaired at some point or another, especially with tiny fonts on PowerPoint slides, so designing for accessibility is designing for our future selves. It’s essential.

Google is a blind user: it sees what a blind user sees. If your site is accessible, Google will see more too.

Want ease of use, device independence, accessibility.

Bugs increase with complexity. A program that is 10 times longer has 32 times the bugs. But most code in most programmes has nothing to do with what the programme should achieve.

However, declarative programming cuts the crap. Javascript, for example, falls over if it gets too long, and declarative programming could replace it and make the computer do the hard stuff without it cluttering up the code. It makes it easier by removing the administrative details that you don’t want to mess about with anyway, so if you let the computer do it then you can remove a lot of this code. So the declarative mark-up is the only bit produced by the human.
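A small illustration of the idea, mine rather than the talk’s: the declarative form states what the result is and leaves the how (the loop, the accumulator, the bookkeeping) to the machine.

```python
# Imperative: spell out every administrative step yourself.
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# Declarative-flavoured: state what the result is; the runtime does the rest.
squares2 = [n * n for n in range(10) if n % 2 == 0]

print(squares2)
```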

Wednesday, May 17th, 2006

Xtech 2006: Tristan Ferne - Chopping Up Radio

Posted by Suw Charman-Anderson

Finding things when your content is audio is hard, and BBC has a lot of audio content. So need to use metadata, so have info about whole programmes. Don’t have data about how these programmes can be chopped up, e.g.

- news stories

- magazine programmes

- interviews

- music tracks

Acquiring metadata about programmes:

- in production process, either people or software, pre-broadcast

- media analysis of what is broadcast

- user annotation

Focusing on user annotation, which is the Annotatable Audio project. Aim is to get listeners to divide programmes into segments and to annotate and tag each bit. Demonstrated a pilot internally, and preparing for a live deployment.

Can annotate the audio by selecting segments (like ‘notes’ in Flickr) and add factual notes. Are thinking about adding comments about whether or not people like stuff. Wiki-like.

Intending to launch around a low-profile programme, probably factual so they promote the annotation angle, not the discussion angle. Users will need to log in to annotate, but any user can see the canonical version.

Will be able to then search within the programme, to generate chapterised podcasts, and also want to support chapterised MP3s.
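A minimal sketch of how annotated segments might be modelled and turned into chapter marks for a chapterised podcast. The data model and field names are invented for illustration, not the BBC’s.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float                  # seconds into the programme
    end: float
    note: str = ""                # factual annotation, Flickr-notes style
    tags: list = field(default_factory=list)

# Invented annotations for a half-hour magazine programme.
programme = [
    Segment(0.0, 90.0, "headlines", ["news"]),
    Segment(90.0, 620.0, "studio interview", ["interview"]),
    Segment(620.0, 1800.0, "music tracks", ["music"]),
]

def chapters(segments):
    """Turn user-annotated segments into (start, title) chapter marks."""
    return [(s.start, s.note) for s in sorted(segments, key=lambda s: s.start)]

print(chapters(programme))
```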

Looking at using it as an internal tool for production staff, e.g. tracklisting for specialist music shows or live sessions where the tracklisting can’t be pulled off of a CD.

Can add in tags and then pull out related Flickr photos, which can work nicely but sometimes doesn’t.

Could be used for syndication, so people could more easily use a section or segment of a programme using a ‘blog this’ button on the interface which creates a Flash interface you can put on your site. Problems with editorial policy on that, but it’s an aspiration for their department.

Regarding licensing, will initially be doing it with audio that there are not licensing issues for, which is either rights-free or for which the BBC has the rights.

Wednesday, May 17th, 2006

Xtech 2006: Tom Loosemore - Treating Digital Broadcast As Just Another API, and other such ruminations

Posted by Suw Charman-Anderson

Going to tell us a story about Mr Baird, Mr Moore, and Mr Berners-Lee.

10-15 years ago, Mr Baird ruled the roost, but we know about TV, and what makes great TV is great programmes, fabulous stories fabulously told. Mr Moore then came along and said our chips will get faster, our kit will get smaller, and his corollary, that disks will just keep getting bigger. That was 30 years ago. 15 years ago Mr Berners-Lee populated the net, and said ‘the internet is made of people’.

10 years ago, Mr Loosemore started working for Wired in the UK as a journalist before they went bust. One of his jobs was to keep abreast of Moore’s Law, as the editor wanted to do a monthly feature on costs and size of computing equipment. Recently found a spreadsheet from 95 charting ISP costs and it was really expensive. In 95 everything was analogue - TV, satellite, cable.

Then in 98 Mr Murdoch gave away digital set-top boxes. It cost £2 billion, and the market thought he was nuts; it nearly cost News International the business. But he saw that it was an essential move, because it gave him more bandwidth. In the UK in 95 you had 4 maybe 5 channels, but when Murdoch went with his set-top box, you had hundreds.

Then digital terrestrial started, which was rubbish, but then taken over by the BBC and you can have about 30 free channels.

Doesn’t look at digital broadcasting the same way that everyone else does. Sees it as a way of distributing 1s and 0s. Doesn’t see it as programmes, but as data.

Lots of different standards and formats.

Also live P2P being used to stream live TV.

Focus on Freeview, and view it as an API.

Expect from an API:

- rich, i.e. interesting. 30 TV channels and a bunch of radio is rich.

- open. Freeview is unencrypted.

- well structured, in theory.

- scalable

- very high availability, it doesn’t fall over

- accessible

- proven

Doesn’t do so well:

- licence? licence is domestic and personal, so do what you will so long as it is domestic and personal.

- documented? Theoretically, dtg.org.uk. But the documentation is copyrighted and managed by Digital Television Group, so have to be a member before you can get the documentation.

But it’s not hard to reverse engineer, so you can see where the broadcasters are adhering to the standards and where they are being a bit naughty.

Five years ago, Freeview was just taking off in the UK, but other stuff was also going on.

There’s a lot of data: 2 Mbps MPEG-2, 2GB of storage per day, 50GB per channel per day, so a terabyte will store 4 channels for a week. But linear TV is a bad way to distribute stuff - most of the time you miss most of the stuff. So what if we just record everything?
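The figures as transcribed don’t quite agree with each other. As a back-of-envelope check, a steady 2 Mbit/s stream works out to roughly 21.6 GB per channel per day, which is at least in the right ballpark for ‘a terabyte stores four channels for a week’:

```python
# Sanity check, assuming a constant 2 Mbit/s MPEG-2 stream
# (an assumption; the transcribed figures above vary).
BITRATE_BPS = 2_000_000
SECONDS_PER_DAY = 86_400

bytes_per_day = BITRATE_BPS / 8 * SECONDS_PER_DAY   # per channel
gb_per_day = bytes_per_day / 1e9

print(round(gb_per_day, 1))        # GB per channel per day
print(round(4 * 7 * gb_per_day))   # GB for four channels for a week
```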

So, colleagues built a box to store entire broadcast from the BBC for a week. 2.3 terabytes of storage. About 1000 programmes. Had it for about three weeks. When you’ve got that much choice, existing TV interfaces like the grid layout don’t work. Too much data.

Broadcast metadata travels alongside the programmes, and the BBC have created an API for that metadata. Got 18 months’ worth of programme metadata, and Phil Gifford turned it into a website. Got genre data, but that’s pretty useless when you have 100,000 programmes, and it’s no help finding stuff you like.

But if you show people stuff that people are in, say programmes with Caroline Quentin, that’s helpful. Mood data was about as useful as genre, but associated with people it becomes interesting.

Then discovered the BBC Programme Catalogue. Wonderfully well structured data model, and amazing how disciplined they had been in keeping their vocabularies consistent. So Matt Biddulph put it online, and the crucial thing is that everything is a feed - RDF, FOAF etc.

But that’s only the metadata. Where are the programmes?

So, 12 TB stores all BBC TV for 6 months, and that’s a lot of programmes. But what happens when you give people that amount of content? Can’t make it public, but can make it available to BBC staff, who have to watch a lot of TV in order to do their job. Built an internal pilot, the Archive Testbed, which is no longer live. Took the learning from the metadata only prototype and found a few things.

Keep the channel data. Channels are useful and throwing them away too soon cost them. Channel brands are more than just a navigational paradigm, they are a kite mark of different types of programme. So some programmes scream ‘BBC 2’, for example.

Give people all the metadata, all of which came from external broadcast sources, not internal databases.

Added ratings and comments, links to blog posts, bit of social scheduling - what are my friends watching? What do people recommend? If I don’t know what I want, I want other people to tell me.

Was fantastic, but had to limit it to a couple of hundred people within the BBC. Was a bit too popular for their own good.

In the R&D department, a couple of them worked on a project called Kamaelia to create framework to plug together components for network applications and about six months ago, persuaded them they needed a project for that framework and so applied it to this.

Hopefully will make the project very successful. Now BBC Macro has been released as a pilot. Will be eventually everywhere.

Wednesday, May 17th, 2006

Xtech 2006: Roland Alton-Scheidl - StreamOnTheFly network

Posted by Suw Charman-Anderson

Reusing broadcast radio/video content. For small broadcasters who have little budget and who need to swap content. System is for the journalists not the listeners. Intelligent audio search and retrieval. Simple DRM mechanisms.

Small community radio stations, often have a stream online. StreamOnTheFly created a structure of nodes which exchange metadata but not the audio files. Each node carries content, metadata, classifications, stats and feedback. Portal provides way for people to follow what content is relevant to them.

[Note: This sounded like a really good idea, but I kinda lost my focus a bit, hence the short notes.]

Wednesday, May 17th, 2006

Xtech 2006: Di-Ann Eisnor - Collaborative Atlas: Post geopolitical boundaries

Posted by Suw Charman-Anderson

Platial, trying to help link people to people.

People have been mapping their lives, autobiogeography: where you were born, went to school, etc. They are mapping things of historical importance, e.g. ‘Women who changed the world’. Maps for hobbies and interests, e.g. bird-watchers and cat-lovers.

Over 4000 maps. Everything from food to activists to romantic encounters.

Has tags and comments. Can embed video.

When people get to ‘own’ places, geopolitical boundaries start to melt. Initial analysis. They looked at tags and found a social topography irrelevant to proximity or national borders. Correlated cities based on users, and some cities are gateways to other cities.

Some themes within the tags are universal from city to city, e.g. city names, coffee, restaurants, food, art and home.

Aggregating geodata in Placedb. Taking location point data such as geoRSS or geotagged data, or data that includes city or street names, and then apply comparative analysis algorithms to find the location of documents with no obvious location. So can collect Flickr pictures, Reuters stories, etc. for a specific area, e.g. your home town, and this is fed into Platial, e.g. the London page.
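A sketch of the easy end of that pipeline: pulling the coordinates out of a geoRSS item. The item is invented; the hard part Placedb does, inferring a location for documents with no coordinates at all, is not shown.

```python
import xml.etree.ElementTree as ET

# An invented geoRSS item of the kind Placedb might ingest.
ITEM = """<item xmlns:georss="http://www.georss.org/georss">
<title>Photo near London</title>
<georss:point>51.5074 -0.1278</georss:point>
</item>"""

elem = ET.fromstring(ITEM)
point = elem.findtext("{http://www.georss.org/georss}point")
lat, lon = map(float, point.split())
print(lat, lon)  # location point data, ready to aggregate by area
```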

Need more geofeeds into Placedb.

Wednesday, May 17th, 2006

Xtech 2006: Matt Biddulph - Putting the BBC’s Programme Catalogue on Rails

Posted by Suw Charman-Anderson

Matt’s talking about the BBC’s experimental Programme Catalogue. It’s amazing. Absolutely great piece of work.

One million programmes, 1.1 million contributors, going back to the 1920s.

The BBC has some 80 years’ worth of archives, which have been catalogued. Only catalogued stuff that was archived, so no record of stuff that was broadcast but then never archived. The default format for archiving audio was, until the 80s, vinyl. They even have stuff on wax cylinder. And it was basically down to Matt Biddulph, with help from Ben Hammersley, to take their database and do something cool with it. So now you can search for anything and you’ll get as much information as possible, including:

- programmes

- xml

- tags

- by date

- search on keywords

- contributors

- feeds

So here’s the Dr Who search, which tells you how many programmes have been made, and when. And the episode page for New Earth, which includes broadcast details, a very ‘terse’ description, the categories which the Beeb has been using to organise its content for years, a list of people involved, and an RDF feed. So then I can search on any particular person and see what other entries they have, and can see a cool little graph showing frequency of appearances from 1930 to 2006.

There’s also the ability to do maps of who’s appeared with whom from the FOAF feed.
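The ‘who appeared with whom’ idea reduces to a simple co-occurrence graph over contributor lists. The programmes and names below are invented stand-ins, not catalogue data:

```python
from collections import defaultdict
from itertools import combinations

# Invented contributor lists standing in for catalogue entries.
programmes = {
    "Episode A": ["Contributor One", "Contributor Two"],
    "Episode B": ["Contributor One", "Contributor Three"],
}

# Build an undirected 'appeared with' graph.
appeared_with = defaultdict(set)
for cast in programmes.values():
    for a, b in combinations(cast, 2):
        appeared_with[a].add(b)
        appeared_with[b].add(a)

print(sorted(appeared_with["Contributor One"]))
```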

It’s linked into the rest of the web, for example, Wikipedia so that you can correlate data using the API at developer.yahoo.com.

The site was built in two months in Ruby on Rails. No ‘really good ideas’, just ‘really good practice’.

Matt is now going through some techie stuff regarding how he created the site.

Can do a fulltext search. People are spending a significant amount of time on the site, and people are linking to it not just because of the programmes but for all sorts of reasons, perhaps talking about an event that happened in the 70s.

Built it really quickly, in two months. But deployment in the Beeb is difficult because the tech is dealt with by Siemens, so they had never seen Ruby, didn’t know about Rails, didn’t work on a fast turnaround. So five months after Matt delivered, the site was still not deployed properly.

The archives people really bought into the project, because their budget always gets cut ‘because they are librarians’. So then people high up picked up the word ‘metadata’, and the archive guys immediately said ‘we have metadata’. They got into the idea of making their stuff available.

Matt worked offsite, communicated by blog where he said what he was doing, and they would send him bugs. Had hardly any meetings, once every two or three weeks.

On the Web 2.0 meme map, does well. Very long-tail, because most of the stuff in the archives doesn’t get much attention. BBC must put its content online at some point, but they don’t have any way of knowing what people want. This gives them an idea of what people want.

Google searches coming in are people searching for really strange and obscure stuff, so there’s a lot of stuff in the database that isn’t anywhere else.

It is perpetual beta, and falls over a lot. Exactly one URL for everything on the site, and they are canonical identifiers. Not a beautiful user experience, but is very rich, very clickable.

[I have to say, this is what the BBC should be talking about and promoting. This is just the cat's whiskers, and the Beeb are missing a trick by not shouting about this from the rooftops.]

Wednesday, May 17th, 2006

Xtech 2006: Jeffrey McManus - Building a Participation Platform at Yahoo!

Posted by Suw Charman-Anderson

Yahoo! wants to open up to third-party developers.

Building a developer ecosystem requires:

- providing compelling products for developers, including APIs, documentation and a community

- disseminating information

- providing support

Mash-ups: not a useful term, and we won’t be talking about mash-ups in a year’s time, because it’ll just be normal to mix two or more sources of data. Although Yahoo! did adopt the motto ‘mash-up or shut up’.

Creating communities is important.

Has included photos in slides from Flickr, which is of course a Yahoo! property.

Timeline:

- Feb 05, search APIs. Made the APIs more openly available

- May 05, Developer RSS Index; Music Engine Plug-ins.

- Jun 05, simple maps API

- Aug 05, comparison shopping API

- Nov 05, Flash maps API; AJAX maps API. No reason not to do both, so did both.

- Dec 05, trip finder API; Javascript Developer Center, JSON support.

- Jan 06, Open-source Javascript UI library and design patterns library. Design patterns library is Creative Commons licensed.

- Feb 06, PHP developer center, serialised PHP support

- Apr 06, updated maps API

- May 06, updated Javascript UI library

Had dozens of developers in June 2005, and are now up to tens of thousands, because they have opened up and provided APIs that people want.
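The JSON support in the timeline above is part of why uptake got easier: a response can be consumed directly, with no XML parsing step. The response shape below is invented for illustration, not Yahoo!’s actual schema:

```python
import json

# Invented response in the general shape of a 2005-era search API.
RESPONSE = '{"ResultSet": {"totalResultsAvailable": 2, "Result": [{"Title": "a"}, {"Title": "b"}]}}'

data = json.loads(RESPONSE)
titles = [r["Title"] for r in data["ResultSet"]["Result"]]
print(titles)  # straight from wire format to native lists and dicts
```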

Maps mashups. For a lot of US locations, Yahoo! has better resolution maps than others. US is hard for mapping but great for satellite.

Demos a site showing Bay Area mash-up of public transport stops and Yahoo! map, but done with no coding, it’s all XML.

Sponsored walk map for Walk America, using Flash API and geoRSS. Also Running Maps, which allows you to plot your route and it tells you how long it takes to walk somewhere.

Rollyo, allows you to create customised searches amongst specific sites, as a ‘search roll’.

Flickr. Related Tag Browser, you type in a tag and it tells you which related tags have been used on recently uploaded photos. Also a Flickr friend network visualisation tool.

Yahoo! Widgets - tiny apps for your desktop. Used to be Konfabulator, and made the product free. Easy to create, and can crack open the widgets to see how they work.

Most third-party users of Yahoo! APIs do so under a non-commercial licence. Commercial exceptions are made on a one-off basis. Working to make it easier to obtain a commercial exception.

Note: I don’t think I’m going to blog all the really techie talks. If you want to see other people’s notes, then try PlanetXtech.

Wednesday, May 17th, 2006

Xtech 2006: Paul Graham - How American are Startups?

Posted by Suw Charman-Anderson

Sitting in the Grand Ballroom of the Grand Hotel Krasnapolsky in central Amsterdam, at yet another conference. This is the third in three weeks, and I’ll be glad when next week comes and I don’t have to think about writing a presentation. Well, for a few weeks at least.

First up:

Paul Graham - How American are Startups?

I’m here to talk about start-ups. Well, if I was giving a talk about start-ups in the US, there’d be a lot more people here, so maybe that says something, or maybe I’m reading too much into it.

Could you recreate Silicon Valley elsewhere? With the right 10,000 people, yes. It used to be that geography was important, but now it’s having the right people. You need two kinds of people to create a start-up: rich people and nerds. Towns become start-up hubs when there are rich people and nerds. NYC could not be a start-up hub because there are lots of rich people but no nerds. Result: no start-ups. Pittsburgh has the opposite problem - lots of nerds, no rich people. The Uni of Washington yielded a hi-tech community in Seattle, but Pittsburgh has a problem with the weather, and no beautiful old city, so rich people don’t want to live in Pittsburgh.

Do you need rich people? Would it be best if the gov’t invested? No. You need rich people, because they tend to have experience and connections, and the fact that it’s their money makes them really pay attention. The idea of gov’t bureaucrats making start-up decisions is comic; it’d be like mathematicians running Vogue… or editors of Vogue running a maths journal. Start-ups funded by bureaucrats would be competing with start-ups run by rich people who have their own money on the line.

Start-ups are people, not buildings. Creating business parks doesn’t help start-ups, because start-ups do not use that kind of space. Where a start-up starts it stays, so all you need is your three guys sitting round a kitchen table. If you can get rich people and nerds together, you can recreate Silicon Valley.

Smart people like smart people and will go where they are, so universities are good. First rate compsci depts are important to this, preferably one of the top handful in the world, and has to stand up to MIT and Stanford. Professors consider one factor only - they are attracted by good colleagues. So if you can attract the best people then you will create a chain reaction which would be unstoppable. Just takes about half a billion, which is within the reach of any developed country.

But you need a place where investors want to live, and where students want to live when they graduate. It needs to be a major air travel hub so people can travel easily. Investors and nerds have similar tastes, because most investors used to be nerds. Taste can’t be too different, but also can’t be too mainstream.

Like the rest of the creative class, nerds want to live somewhere with personality, that’s not mass produced. To create a start-up hub, you need a town that doesn’t have mass development of large tracts of land [so, no Milton Keynes then?]. Most personality is found in older towns. Pre-war apartments are built better, and people like them more.

You can’t build a Silicon Valley, you let it grow.

Any town’s personality needs to have a good nerd personality. Nerds like towns where people walk around smiling, so not LA because people don’t walk around, or NYC where people don’t smile.

Nerds will pay a premium to live where there are smart people. They like quiet, sunlight, hiking. A nerd’s idea of paradise is Berkeley or Boulder. The start-up hubs in the US are very young-feeling, but not new towns. Want a place that tolerates oddness. Get an election map and avoid the red bits.

To attract the young, you need a city with a live centre. None of the start-up hubs have been turned inside out like some American cities have. Young people do not want to live in suburbs.

Within the US, Boulder and Portland have the most potential. They are both only a great university short of being a start-up hub.

The US has some significant advantages:

1. Allows immigration. Would be impossible to reproduce Silicon Valley in Japan, because most of the people there speak with accents and the Japanese don’t allow immigration. It has to be a Mecca, and you can’t have a Mecca if you don’t let people in.

2. Won’t work in a poor country. India might one day produce a Silicon Valley, because it has the right people, but it’s still very poor. The US has never been as poor as some countries are now, so we have no data to say how you get from poor to Silicon Valley. There may be a ‘speed limit’ to the evolution of an economy.

3. The US is not (yet) a police state. China might want to create a Silicon Valley, but their tradition of an autocratic central gov’t goes back a long way. Gets you efficiency but not imagination. Can build things, but not sure it can design things. Hard to have new ideas about tech without having new ideas about politics, and many new tech ideas do have political implications so if you squash dissent you squash new ideas. Singapore suffers a similar problem.

4. Need really good unis, and the US has those. Outside of the US, people think of Cambridge in the UK, then pause. The best professors seem to be all spread out, instead of concentrated, and that hinders them, because they don’t have good colleagues and their institutions don’t act as a Mecca.

Germany used to have the best universities, until the 30s. If you took all the Jews out of any university in the US, there’d be a huge hole, so as there are few Jews in Germany, perhaps that would be a lost cause.

5. You can fire people. One of the biggest obstacles in Europe is rigid labour law, which is bad for start-ups because they have the least ability to deal with the bureaucracy. Start-ups need to be able to fire people because they need to be flexible. EU public opinion will tolerate people being fired in industries where they care about performance, but that seems limited to football.

6. Work is less identified with employment in the US. The EU has the attitude that the employer should protect the employee. In the US, employment has shed these paternalistic overtones, which makes it easier for start-ups to hire people, and easier to start start-ups. Most Americans still think they need to get a job, but the less you identify work with employment, the easier it is to start your own company. All you have to do is imagine it.

A year after the founding of Apple, long after they had sold their stuff, Steve Wozniak was still working for HP. When Jobs found someone to give them venture funding, on the condition that Woz quit, he initially refused.

7. America is not too fussy. If there are any laws regulating businesses, you can assume that start-ups will break them, because they don’t know about them and don’t care. They get run out of places they shouldn’t be run out of, like garages or apartments. Try that in Switzerland and you’d likely get reported. To get start-ups you need just the right amount of regulation.

8. The US has a huge domestic market. Start-ups usually begin by selling locally, which works because there is a huge domestic market. In Sweden, for example, the market is much smaller. The EU was created to form an international market, but everyone still speaks different languages. It does seem, though, as if every educated person in Europe now speaks English, and if present trends continue, more will.

9. Funding. Start-up funding doesn’t only come from VCs, but also from business angels. Google might not have got where it is without $100k of angel funding. All you need to do to get that process started is to get a few start-ups going, but the cycle is slow: it takes five years for a successful start-up to produce a potential angel investor. You need angels as well as VCs.

10. America has a more relaxed attitude to careers. In the EU, the idea is that everyone has a specific occupation, but in the US things are more haphazard. That’s a good thing. A start-up founder is not the type of career a high-school kid will choose - they choose conservatively. Start-ups are not things you plan, so you are more likely to get them in a society that allows career changes on the fly.

Compsci was supposed to provide researchers, but in actual fact most students are there because they are curious.

America’s schools might be a benefit: they are so bad that people wait until college before deciding what their career will be.

This list is not meant to suggest America is the best place for start-ups. It should be possible not just to duplicate Silicon Valley, but to improve on it. What’s wrong with SV?

- It’s too far away from San Francisco. You either live in the Valley, or commute from SF. Would be better if the Valley was actually interesting, but it’s the worst sort of strip development.

- Bad public transportation. There’s a train, which is not so bad by American standards, but by EU standards it’s awful. So design a town for trains, bikes and walking, with cars last, but it’ll be a long time before the US does that.

- Have lower capital gains taxes. Low income tax isn’t so much of an issue, but capital gains is. Lowering the tax instantly increases returns on stocks as opposed to real estate, so to encourage start-ups, have low capital gains taxes. But decreases in capital gains taxes disproportionately favour the rich. Belgium has a capital gains tax of zero.

- Smarter immigration policy. The people running Silicon Valley are aware of the shortcomings of the US immigration system, which has become very paranoid since 2001. What fraction of the smart people who want to come to the Valley actually get in? Half? US policy keeps out most smart people and puts the majority of those it admits in crap jobs. A country which got immigration right could become a Mecca for smart people simply by letting them in.

So the basic recipe for a start-up hub is a great university and a nice town. You could improve on Silicon Valley easily. Just let people in.

Tuesday, May 16th, 2006

EBU: Wrapup

Posted by Suw Charman-Anderson

Last week, Kevin and I spent a couple of days in Warsaw at the European Broadcasting Union’s Radio News Specialised Meeting. Kevin has done a sterling job of taking notes from the sessions (although I think he has a few still to write up), so I don’t want to rehash what happened, but I did want to talk about a few highlights for me.

It was, for both of us I think, a really good conference. The delegates came from across Europe and were all either public broadcasters or freelances. The atmosphere in the meeting was welcoming and open, and although I am neither a radio journalist nor a public broadcaster myself, I was made to feel as if I had as valuable a contribution to make as anyone else there.

Probably the stand-out session for me was The Dart Center for Journalism & Trauma’s Mark Brayne, who gave a talk on how journalists and their employers deal with the aftermath of trauma, whether that be war, terrorist attacks or gruesome accidents. It’s very easy for those of us engaged in types of journalism that don’t require us to go out into the field to forget that there are people who end up having to deal with some harsh realities, and that the consequences are potentially life-damaging. It was fascinating to me to see how the others in the room grappled with issues such as the ethics of sending a freelance into a war zone, and then to see the way that the freelance’s life is affected by those decisions.

My contribution was to the ‘citizen journalism’ panel, although by the end of our session we had pretty much all agreed that ‘citizen journalism’ is a divisive term and should be called something else. I prefer to use the phrase ‘participatory media’, because firstly it removes the implication that citizens and journalists are two different things, and secondly because it removes the erroneous concept that people engaging in these sorts of behaviours are trying to, or even want to be, journalists. It is also a more technologically agnostic phrase, not implying the written word as ‘journalism’ so often does. After all, much participatory media is photos, video or audio, so to think that it’s just blogging is seeing but a fraction of the story.

In dramatic contrast to the WeMedia fiasco, the big media people in the room were really interested in different types of participatory media technologies such as blogs, podcasting, photosharing, videoblogging etc.; in the different behaviours shown by those engaging in participatory media; and in the different scales at which these sorts of projects can work. Vin Ray from the BBC’s College of Journalism asked me for the mindmap that I had thrown together for the session, so here it is.

We also had some really good contributions from Holger Hank, Head of Multimedia at Deutsche Welle, who spoke about their blogs: a US election blog, one covering an ascent of Mount Everest, and now some World Cup blogs, the most popular of which is the Spanish one. DW also run The Bobs - The Best of the Blogs - which gives awards to journalistic blogs in nine languages. One past winner was a Chinese blog about dogs, which was a subtle commentary on the way people are treated in China, using dogs as metaphors. He also talked about how blogs are taken up differently by different cultures - for example, the way blogs are viewed in South America is very different from North America - and he explained how German blogs aren’t as original or self-confident as American ones.

Arthur Landwehr, Chief Editor at SWR and self-confessed ‘non-expert’ talked about the trends in blogging that he observed whilst working in the States. I’m not sure I agree with his assessment of US political blogs, but his discussion of religious blogs was fascinating. It seems that some churches in the States are seeing blogging and podcasting as a serious threat, because people are podcasting sermons and the congregation are listening on their commute to work and not coming to church. Ex-church goers, particularly rebel Mormons, are also using blogs to criticise their churches, having found a way to voice their opinions and talk about their experiences that they didn’t have before.

Rob Freeman, Head of Multimedia at the Press Association, demonstrated a number of mash-up sites, showing what you can do with crime statistics and Google Maps, for example.

It was a shame that we didn’t have very long for questions, but I got the sense that people were very curious about what could be achieved and how it could be done. Many of the delegates had not previously come across or thought about participatory media, so for them it was an entirely new area. I wish I could have had time to demo NowPublic to them, but there simply wasn’t the chance.

After the end of the conference, which was nicely paced with not too many sessions and an adequately long lunch that involved a decent amount of food supplied by our gracious hosts, Polskie Radio, we had the opportunity to talk further. I had some really great conversations with several people, but the one that stands out was with Urban Hamid. Whilst we were being shown around Warsaw on the guided tour that had been organised for us (and which was great fun), we had a really cool chat about Creative Commons and the benefits to freelances of releasing archival video or audio footage for others to reuse and remix.

I’ve had similar conversations with journalists before, and their response is often ‘But if I give my stuff away, how can I make a living out of it?’, but Urban immediately understood that by giving your stuff away under, say, a non-commercial licence, you can bring your work to the attention of a lot more people, and thus get yourself more work. It was a real pleasure to talk to him, and I am hoping soon to hear about his new CC-licenced video archive!

Overall, I came away feeling relieved that not everyone in the media is as clueless and small-minded as those whose ‘leadership’ we were subjected to two weeks ago. A small group of press and bloggers has created a false sense of antagonism between the two camps, yet if you had been looking for that tension last week in Warsaw, you would not have found it. Indeed, I so enjoyed talking with the journalists I met that the EBU conference turned into a much needed tonic against the bitterness of the previous week’s stupidities. I just hope that the other delegates felt as welcomed into my world as I was into theirs.