Visualizing Del.icio.us Roundup I have been coming across many del.icio.us tools for visualizing usage during my daily research hours. So many, in fact, that I have decided to start making note of the ones I come across. Over the span of about two weeks, I have collected as many as I could find. There are a couple more that I have in mind, but they don't seem to be working at the moment. Cogito » Blog Archive » The Ontology Myth For the past year, I have been observing a phenomenon in the US market: the spread of the 'myth' of ontology. Ontologies are important elements for understanding text through semantic analysis, but they are insufficient (and often not even necessary) to solve the problem of handling unstructured knowledge. Nonetheless, according to this idea, if you have a complete ontology you need nothing else: semantic technology should be able to do everything automatically (for example, the activities typically associated with knowledge management, such as automatic categorization and the discovery of knowledge and of relationships between data). This assumption lacks substance, and even if I understand why the idea has spread (in the end, we are all searching for fast, automatic solutions), it is important to explain the reality, which is completely different from the utopian view that some would have you believe.
Lotka's Law Lotka's law (1926): the aim of the law is to measure the contribution of each publishing researcher to scientific progress. It states that a small number of researchers produces a large share of publications. Studying the distribution of scientific authors, Lotka reached the following conclusion: the number of publishing researchers n_i who write i articles is n_i = n_1 / i^2, for i from 1 to i_max, where i_max is the maximum productivity of a researcher. Example: suppose i varies from 1 to 6 and the number of researchers who have published a single article is n_1 = 100; Lotka's law then lets us compute n_i: the number of researchers who have published two articles is 25, three articles 11, ..., and 3 researchers have published the maximum of i_max = 6 articles.
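The worked example above follows directly from the inverse-square formula; a minimal sketch (the function name and the choice to round to whole researchers are mine):

```python
def lotka_counts(n1, i_max):
    """Number of researchers publishing i articles, per Lotka's law:
    n_i = n_1 / i^2, rounded to whole researchers."""
    return {i: round(n1 / i**2) for i in range(1, i_max + 1)}

counts = lotka_counts(100, 6)
# counts[2] == 25, counts[3] == 11, counts[6] == 3, as in the example
```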
The Cochrane Collaboration - Evidence-based healthcare Are scientific methods used to determine which drugs and procedures are best for treating diseases? The answers may surprise you. Modern health care is undergoing a long-overdue and dramatic evolution. Evidence-based health care is the conscientious use of current best evidence in making decisions about the care of individual patients or the delivery of health services. [Figure: The Evidence-based Medicine Triad. Source: Florida State University, College of Medicine.] A systematic review is a high-level overview of primary research on a particular research question that tries to identify, select, appraise, and synthesize all high-quality research evidence relevant to that question in order to answer it.
Ontology (information science) In computer science and information science, an ontology formally represents knowledge as a hierarchy of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. The term ontology has its origin in philosophy and has been applied in many different ways. What many ontologies have in common in both computer science and in philosophy is the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories.
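The description above — a hierarchy of concepts with typed properties and interrelationships — can be made concrete with a toy sketch (all concept and relation names below are hypothetical, invented for illustration, not drawn from any published vocabulary):

```python
# A toy ontology: concepts arranged in a hierarchy, plus one typed relation.
ontology = {
    "concepts": {
        "Animal": {"parent": None},
        "Dog": {"parent": "Animal"},
        "Person": {"parent": None},
    },
    # relation name -> (domain concept, range concept)
    "relations": {"ownedBy": ("Dog", "Person")},
}

def is_a(ontology, child, ancestor):
    """Walk the parent chain to test subsumption: child is-a ancestor."""
    node = child
    while node is not None:
        if node == ancestor:
            return True
        node = ontology["concepts"][node]["parent"]
    return False

# A Dog is an Animal via the hierarchy; a Person is not.
```

Real ontology languages such as OWL add much richer machinery (cardinality, inverse relations, reasoning), but the hierarchy-plus-relations core is the same.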
19 of the Best Infographics from 2010 Research can sometimes be a bit of a chore, but when knowledge is wrapped up in charts, cartoons, or even some heart-holding robots, suddenly "information" isn't such a scary word. What do Facebook's 500 million users look like? Who's suing whom in the mobile world? How does FarmVille stack up against actual farms? Have a look through the list and let us know which graphics you liked best (or learned the most from) in the comments below. Survey Shows the Internet Would Have Passed Prop 19: Prop 19, California's controversial bid to legalize marijuana, lost at the polls on Tuesday, but if that vote had been up to the wider web of Internet users, Prop 19 would have passed. Social Media's Impact on the Midterm Elections: Social media, especially Facebook, had a huge impact on how the U.S. midterm elections were perceived and decided.
Zipf's Law Zipf's law is an empirical observation concerning the frequency of words in a text. It takes its name from its author, George Kingsley Zipf (1902-1950). Origins: in the counts Zipf reported, the most common word occurred 8,000 times; the tenth most common, 800 times; the hundredth, 80 times; and the thousandth, 8 times. In light of other studies, which can now be run in a few minutes on a computer, these results seem a little too precise to be perfectly exact; the tenth word in a study of this kind should appear around 1,000 times, because of an elbow effect observed in this type of distribution. The law states that the frequency f(n) of the word of rank n is f(n) = K / n, where K is a constant. Theoretical point of view: more generally, the frequency decays as 1 / n^s, with s just slightly greater than 1. Mathematical definition: writing N for the number of elements (words), k for their rank, and s for the exponent, Zipf's law gives the relative frequency f(k; s, N) = (1/k^s) / (sum over n = 1..N of 1/n^s).
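The mathematical definition above translates directly into a few lines (a minimal sketch; the function name is mine):

```python
def zipf_frequencies(n_words, s=1.0):
    """Relative frequency of the word of rank k under Zipf's law:
    f(k; s, N) = k^-s / sum_{n=1..N} n^-s."""
    norm = sum(n ** -s for n in range(1, n_words + 1))
    return [k ** -s / norm for k in range(1, n_words + 1)]

freqs = zipf_frequencies(1000)
# With s = 1, rank 10 is exactly a tenth as frequent as rank 1,
# matching the 8,000 / 800 / 80 / 8 pattern described above.
```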
66 Successful Bloggers and What They Can Teach You Welcome, Problogger readers! Thanks, Darren, for sending your readers here. At the bottom of this post I will tell you how I came up with this list and its order. Here's the list. [The original post lists all 66 blogs here; the linked names were lost in extraction.] Ted Demopoulos owns a consulting firm, Demopoulos Associates. The list is his and Kaplan Publishing's and comes from his new book, What No One EVER Tells You About Blogging and Podcasting. The order above is the order of appearance in Ted's book. Full disclosure: I get NOTHING from the sales of Ted's book. The reader gets a LOT. One more thing: do you think your blog belongs on the list of successful blogs? If so, send me a link and tell me why. Enjoy!
Tag (metadata) Tagging was popularized by websites associated with Web 2.0 and is an important feature of many Web 2.0 services. It is now also part of some desktop software. Online and Internet databases and early websites deployed tags as a way for publishers to help users find content. Tagging has gained wide popularity due to the growth of social networking, photo sharing, and bookmarking sites. Websites that include tags often display collections of tags as tag clouds. Many blog systems allow authors to add free-form tags to a post, along with (or instead of) placing the post into categories. An official tag is a keyword adopted by events and conferences for participants to use in their web publications, such as blog entries, photos of the event, and presentation slides.
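A tag cloud of the kind mentioned above is, at its core, just a frequency count scaled to display sizes. A minimal sketch with made-up posts (the post data and function name are illustrative):

```python
from collections import Counter

# Hypothetical blog posts, each carrying free-form tags.
posts = [
    {"title": "a", "tags": ["python", "web"]},
    {"title": "b", "tags": ["python"]},
    {"title": "c", "tags": ["web", "design"]},
]

def tag_cloud(posts, max_size=3):
    """Map each tag to a display size (1..max_size) scaled by how often
    it appears across all posts."""
    counts = Counter(t for p in posts for t in p["tags"])
    top = counts.most_common(1)[0][1]
    return {tag: max(1, round(max_size * n / top)) for tag, n in counts.items()}

# tag_cloud(posts) -> {'python': 3, 'web': 3, 'design': 2}
```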
SPEAR Algorithm The SPEAR algorithm is a tool for ranking users in social networks by their expertise and influence within the community. In 2009, my co-worker Ching-man Au Yeung from the University of Southampton and I presented the SPEAR ranking algorithm in our joint paper Telling Experts from Spammers: Expertise Ranking in Folksonomies at the ACM SIGIR 2009 Conference in Boston, USA. The graph-based SPEAR ranking algorithm (Spamming-resistant Expertise Analysis and Ranking) is a new technique for measuring the expertise of users by analyzing their activities. The focus is on the ability of users to find new, high-quality information on the Internet. At the same time, the algorithm has been shown to be very resistant to spamming attacks. The original use case, and the one described in our SIGIR paper, was to find expert users and high-quality websites for a given topic on the social bookmarking service Delicious.com, back in 2009 still a Yahoo! property. The two main elements of the SPEAR algorithm are the mutual reinforcement of user expertise and document quality, and a credit scoring function that favors the discoverers of new information over later followers.
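The mutual-reinforcement element can be sketched as a HITS-like iteration. This is a simplified illustration only: the actual SPEAR algorithm additionally weights the user-document matrix with time-based discoverer credit scores, which this sketch omits, and the matrix below is made-up data.

```python
# A[u][d] = 1 if user u tagged document d (toy example).
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]

def mutual_reinforcement(A, iters=50):
    """HITS-style iteration: a user's expertise grows with the quality of
    the documents they tag, and a document's quality grows with the
    expertise of the users who tag it."""
    n_users, n_docs = len(A), len(A[0])
    quality = [1.0] * n_docs
    expertise = [1.0] * n_users
    for _ in range(iters):
        expertise = [sum(A[u][d] * quality[d] for d in range(n_docs))
                     for u in range(n_users)]
        quality = [sum(A[u][d] * expertise[u] for u in range(n_users))
                   for d in range(n_docs)]
        e_sum, q_sum = sum(expertise), sum(quality)
        expertise = [e / e_sum for e in expertise]
        quality = [q / q_sum for q in quality]
    return expertise, quality

expertise, quality = mutual_reinforcement(A)
# Users who tag more widely-endorsed documents end up with higher expertise.
```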