


I am not sure the average person would understand the graphic. For example, the RSS icon is not clear to a newbie (I know because I got burned on that one in a presentation recently).

Perhaps break it up the way we explain search engines:
1) spider = consumption of info, which in this case is manually specified by the nptech tag
2) index = spiders retrieve, but a different program chooses whether or not to store an item in the index. The corollary here is the criteria of the pipe/lens/whatever.
3) query tool = the user interface part, which is hopefully simple.
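The three-stage analogy above can be sketched in a few lines of code. This is purely illustrative, with invented function and field names; the real nptech setup described in these comments runs on RSS tools, not this code.

```python
# Hypothetical three-stage pipeline mirroring the spider/index/query analogy.

def spider(items):
    """Consumption: keep only items manually tagged 'nptech'."""
    return [item for item in items if "nptech" in item.get("tags", [])]

def index(items, criteria):
    """Storage: a separate step decides what enters the index
    (the 'criteria of the pipe/lens' from the comment above)."""
    return [item for item in items if criteria(item)]

def query(indexed, term):
    """Query tool: the simple, user-facing search over the index."""
    return [item for item in indexed if term.lower() in item["title"].lower()]

items = [
    {"title": "Nonprofit RSS primer", "tags": ["nptech", "rss"]},
    {"title": "Cat videos", "tags": ["fun"]},
]

# Accept everything into the index, then search it.
results = query(index(spider(items), lambda item: True), "rss")
print([item["title"] for item in results])  # ['Nonprofit RSS primer']
```

The point of separating the three functions is the same as in the analogy: tagging (consumption), filtering (indexing), and searching (querying) are independent steps that can each be swapped out.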

I don't know, though. Typing this, I may not be improving anything. Arrgh.


Awesome job! That's pretty much how it works. I know the system LOOKS complicated, but that's only because our needs are varied. If we do it right, there will be four user roles for nptech: the contributor, the feed consumer, the editor, and the researcher, with RSS bookmarklets for the contributors, Pligg for the editors and consumers, and Google CSE (manseo) for the researchers.

The vast majority of contributors will just be bookmarking away; they won't notice the difference. The editors and consumers, however, will use the Pligg site to see what's new in the nptech world, and the editors can then help sort and filter all the items. The researchers are the people who can do all the crazy statistical analysis and mashups via Google CSE, or simply by downloading all the data to their desktops. We won't hold the raw data back from them.
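The four-role flow described above can also be sketched in code. Everything here is made up for illustration (the helper names, the keyword filter standing in for editorial judgment); the actual system uses bookmarklets, Pligg, and Google CSE, not these functions.

```python
# Hypothetical sketch of the four nptech roles: contributors bookmark into a
# raw pool, editors filter it, consumers read the filtered list, and
# researchers get the full raw data.

raw_pool = []   # everything contributors bookmark
approved = []   # what editors let through to the consumer-facing feed

def contribute(title):
    """Contributor role: just bookmark; no awareness of downstream steps."""
    raw_pool.append({"title": title})

def edit(keyword="nptech"):
    """Editor role: sort and filter. A trivial keyword rule stands in
    for real human editorial judgment."""
    approved.extend(
        item for item in raw_pool
        if keyword in item["title"].lower() and item not in approved
    )

def consume():
    """Consumer role: read the filtered feed (the Pligg site)."""
    return list(approved)

def research():
    """Researcher role: full, unfiltered access to the raw data."""
    return list(raw_pool)

contribute("Mapping the NpTech community")
contribute("Unrelated link")
edit()
print(len(consume()), len(research()))  # consumers see 1 item, researchers see 2
```

The asymmetry is the design point from the comment: consumers get a curated view, while researchers deliberately get everything, because the raw data is never held back.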
