
Science


Interactive Dynamics for Visual Analysis. Jeffrey Heer, Stanford University; Ben Shneiderman, University of Maryland, College Park. The increasing scale and availability of digital data provide an extraordinary resource for informing public policy, scientific discovery, business strategy, and even our personal lives. To get the most out of such data, however, users must be able to make sense of it: to pursue questions, uncover patterns of interest, and identify (and potentially correct) errors.

In concert with data-management systems and statistical algorithms, analysis requires contextualized human judgments regarding the domain-specific significance of the clusters, trends, and outliers discovered in data. Visualization provides a powerful means of making sense of data. The goal of this article is to assist designers, researchers, professional analysts, procurement officers, educators, and students in evaluating and creating visual analysis tools.

Some visualization system designers have explored alternative approaches. What are the most common reasons for rejecting a paper? Academic.research.microsoft. How NOT to review a paper. An Honest Academic Rejection Letter. Travailler au CNRS | CNRS. Each year, the CNRS recruits permanent researchers, engineers, technicians, and administrative staff who devote themselves to research and to supporting it.

They work within Europe's leading research organization, which counts roughly 25,000 permanent staff (11,106 researchers and 13,511 engineers and technicians) plus 7,327 non-permanent public employees. They belong to a reformed organization that has modernized its administration, built partnerships with the academic and business worlds, pursued a bold international policy, and communicated its results to a very wide audience. In 2015, about 300 researchers and 317 engineers and technicians were recruited into the CNRS's 1,116 research and service units. In addition, about 6,145 doctoral students and other contract researchers (49% of them from abroad)* work in the organization's laboratories over the course of a year.

The Stapel scandal, or how one man alone duped the scientific system. If we had to pick a recent textbook case of scientific fraud, the scandal surrounding the work of the Dutch researcher Diederik Stapel would make an excellent candidate. Single-handedly, this researcher lastingly tarnished the image of an entire discipline, social psychology, and exposed several flaws in the scientific system. The affair broke at the end of August 2011 at Tilburg University, where Diederik Stapel taught: three young researchers reported their suspicions about the data in his experiments, both the data that appeared in the studies he published and the data he supplied to his students. It quickly emerged that the professor had falsified or even invented entire datasets, which Diederik Stapel, author of several headline-making articles, promptly admitted as early as September 2011. To cover his tracks, the researcher, a minor celebrity in his field, had a well-honed technique.

The Paper Reviewing Process | How to Do Great Research. Learning how to review papers not only (obviously) makes you a better reviewer, but it can also help you as an author, since an understanding of the process can help you write your paper submissions for an audience of reviewers. If you know the criteria that a reviewer will use to judge your paper, you are in a much better position to tailor your paper so that it has a higher chance of being accepted. There are many good resources that describe the paper reviewing process already, including those that explain the process (and its imperfections) and those that provide instructions for writing a good review (as well as techniques to avoid). There are also a few nice summaries of the review process for conferences in different areas of computer science that lend visibility into the process (e.g., here and here).

Program committee chairs sometimes provide guidelines for writing reviews, such as these. The Review Process. Why understanding the review process is important. Invariant questions. Why Academic Papers Are A Terrible Discussion Forum | The Rationalist Conspiracy. This article was written for the website Less Wrong, and is cross-posted here. I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers, prestige in certain communities and promises to grant agencies, for instance, and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum, a way for ideas to be read by other people, evaluated, spread, criticized, and built on, academic papers fail.

Why?

1. Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on. For this to happen, ideas need to get out there, whether orally or in writing, so others can build on them.
2. This problem is familiar to anyone who's done research outside a university.

There's nothing comparable for academic papers. How to peer review scientific work. Avoid decision fatigue. I've written a few hundred peer reviews. The dominant factor in whether I wrote a high- or low-quality review was decision fatigue. Decision fatigue affects far more than peer review: as humans, we make decisions all day long, and each decision draws from a finite (but replenishable) cognitive resource. Reviewing an entire scientific manuscript drains that resource. The book Willpower is a compelling survey of the research in this area; I recommend it to all grad students. The book presents advice that we can adapt to peer review: review one paper per day. I've never managed to observe all of these rules for all papers in a single venue; there have been times when it was mathematically impossible to review only one paper per day. But I do my best to stick to these principles. Recognize conflicts. You can have several kinds of conflict of interest. Many venues use the NSF's proposal-reviewing conflict-of-interest policy as the basis for their own conflict policy.

Friends and foes. Ike Antkare: the renowned researcher… who doesn't exist. On the web, Ike Antkare ranks among the top ten researchers in computer science. Yet this researcher does not exist! His creator, the academic Cyril Labbé, thereby demonstrates the limits of "quantitative" evaluation based on citation counts.

Born this year, Ike Antkare is already one of the top ten researchers in computer science and ranks among the 100 most renowned scientists in the world, ahead of Albert Einstein! Working at the International Institute of Technology United Slates of Earth, this little genius has published, according to Google Scholar (1), 102 articles that have been picked up and cited many times on the web. And yet this renowned researcher does not exist! The real-fake researcher. At 37, Cyril Labbé is a lecturer at Université Joseph Fourier (Grenoble 1) and a researcher in computer science. "My work concerns databases and data streams," he explains. Referenced texts with neither head nor tail. When Reviews Do More Than Sting | February 2013.

By Bertrand Meyer. Communications of the ACM, Vol. 56 No. 2, Pages 8-9. doi:10.1145/2408776.2408780. Bertrand Meyer wonders why malicious reviews run rampant in computer science.

Phylomemetic Patterns in Science Evolution—The Rise and Fall of Scientific Fields. Abstract: We introduce an automated method for the bottom-up reconstruction of the cognitive evolution of science, based on big data issued from digital libraries and modeled as lineage relationships between scientific fields.

We refer to these dynamic structures as phylomemetic networks, or phylomemies, by analogy with biological evolution, and we show that they exhibit strong regularities, with clearly identifiable phylomemetic patterns. Some structural properties of the scientific fields (in particular their density), which are defined independently of the phylomemy reconstruction, are clearly correlated with their status and their fate in the phylomemy (such as their age or their short-term survival). Citation: Chavalarias D, Cointet J-P (2013) Phylomemetic Patterns in Science Evolution—The Rise and Fall of Scientific Fields. PLoS ONE 8(2): e54847. doi:10.1371/journal.pone.0054847. Editor: Eduardo G. Altmann, Max Planck Institute for the Physics of Complex Systems, Germany. Gaming Google Scholar Citations, Made Simple and Easy | The Scholarly Kitchen. When metrics are adopted as evaluative tools, there is always a temptation to game them.

Without rules and sanctions to prevent widespread manipulation, metrics lose their relevance, become meaningless, and are quickly disregarded by those who once believed that they stood for something important. Why count Facebook Likes and Tweets when you can purchase thousands of them for just a few dollars? For these metrics to remain robust indicators of something meaningful, it is important to keep the cheats out of the system.
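The h-index, one of the citation-count metrics that rankings like Google Scholar Citations rest on (and that the Ike Antkare experiment exposed), illustrates how cheaply such numbers can be inflated. The sketch below is illustrative only: the citation counts are hypothetical, and the inflated record merely mimics the effect of a hundred-odd mutually citing generated papers, not Labbé's actual setup.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# A modest genuine record: five papers with ordinary citation counts.
genuine = [12, 9, 5, 3, 1]
print(h_index(genuine))  # -> 3

# Antkare-style inflation: 102 generated papers that all cite one another,
# so every paper receives roughly 101 citations.
fake = [101] * 102
print(h_index(fake))  # -> 101
```

The point of the sketch is that the metric rewards volume times mutual citation: a closed cluster of n fake papers citing each other yields an h-index near n, with no human reader ever involved.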

Each year Thomson Reuters, producer of the Journal Impact Factor, puts dozens of journals in time-out for manipulating their numbers through self-citation. Thomson Reuters has a vested interest in keeping its citation database clean for a simple reason: it profits by selling its data and services to universities, publishers, governments, and funding agencies.