Lexico Web Page (downloadable app for the PC)
Cédric Lamalle, William Martinez, Serge Fleury, André Salem. Lexico3 is developed by the SYLED-CLA2T university team. The software is distributed commercially. If you are an independent researcher, you may use it temporarily for your personal work; if, on the other hand, your laboratory or company can purchase the software, that will help us develop it further. On request, we will send you an invoice issued by the accounting officer of the Université Paris 3 Sorbonne nouvelle, starting with a pro forma invoice if you wish (specify to whom it should be addressed). An English version, a LEXICO tutorial, and a QuickTime animation (Lexico3 demo) are available. Explorations textométriques: we are currently gathering several reports on experiments carried out with the Lexico family of software in the course of numerous research projects and various collaborations. Reference work: Lebart, L. & Salem, A. (1994), Glossaire de Statistiques Textuelles.

http://www.tal.univ-paris3.fr/lexico/

Related: Text Analytics

Statistique textuelle (Ludovic Lebart and André Salem). Preface by Christian Baudelot. Chapter 0: Preface, Contents, Foreword, Introduction (PDF format). Chapter 1: Domains and problems (PDF format). The first chapter, Domains and problems, covers both the disciplinary fields involved (linguistics, statistics, computer science) and the problems and approaches.

MAXQDA MAXQDA is professional, powerful, and easy-to-use QDA software for your qualitative text or content analysis. MAXQDAplus 11, the extended version of MAXQDA, includes the dictionary and content analysis tool MAXDictio.

Using CLAN Warning: after installing a new version of CLAN for use with old data, you will need to get a new version of the MOR grammar and run MOR, POST, and CHECK again on your old data to make sure they work with the newer format. Alternatively, you may wish to continue using old versions of CLAN with old versions of corpora. However, CHILDES data on the web are always updated to run with new versions of CLAN. For Windows: CLANWin runs on Windows 2000, XP, Vista, and 7.

The Signature Stylometric System The aim of this website is to highlight the many strong links between Philosophy and Computing, for the benefit of students of both disciplines: for students of Philosophy seeking ways into formal Computing, learning by discovery about programming, how computers work, language processing, and artificial intelligence, and even conducting computerised thought experiments on philosophically interesting problems such as the evolution of co-operative behaviour; and for students of Computing who are keen to see how their technical abilities can be applied to intellectually exciting and philosophically challenging problems.

HyperRESEARCH Qualitative analysis with HyperRESEARCH. No complications. At Researchware, we believe studying our qualitative world should be straightforward, and we design our software accordingly. HyperRESEARCH supports your needs, instead of trying to squeeze you into someone else's pet methodology.

Linguistics and the Book of Mormon According to most adherents of the Latter Day Saint movement, the Book of Mormon is a 19th-century translation of a record of ancient inhabitants of the American continent, written in a script which the book refers to as "reformed Egyptian."[1][2][3][4][5] This claim, like virtually all claims to the historical authenticity of the Book of Mormon, is generally rejected by non–Latter Day Saint historians and scientists.[6][7][8][9][10] Linguistically based assertions are frequently cited and discussed in the context of the Book of Mormon, both in favor of and against the book's claimed origins. Both critics and promoters of the Book of Mormon have used linguistic methods to analyze the text.

Weft QDA Weft QDA is (or was) an easy-to-use, free and open-source tool for the analysis of textual data such as interview transcripts, fieldnotes and other documents. An excerpt from my MSc dissertation explains the thinking behind the software in more detail. The software isn't being maintained or updated, but the most recent version is available for interest. This version includes some standard CAQDAS features:

- Import plain-text documents from text files or PDF
- Character-level coding using categories organised in a tree structure
- Retrieval of coded text and 'coding-on'
- Simple coding statistics
- Fast free-text search
- Combine coding and searches using boolean queries AND, OR, AND NOT
- 'Code Review' to evaluate coding patterns across multiple documents
- Export to HTML and CSV formats

Analysis

Jean Lievens: Wikinomics Model for Value of Open Data (Jean Lievens)

Graphing the history of philosophy « Drunks&Lampposts [Image: a close-up of ancient and medieval philosophy, ending at Descartes and Leibniz] If you are interested in this data set you might like my latest post, where I use it to make book recommendations. This one came about because I was searching for a data set on horror films (don't ask) and ended up with one describing the links between philosophers. To cut a long story very short, I've extracted the information in the 'influenced by' section for every philosopher on Wikipedia and used it to construct a network, which I've then visualised using Gephi.
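The post's pipeline (scrape 'influenced by' links, build a directed graph, visualise) is easy to sketch. A minimal Python sketch, with a few hand-made (philosopher, influence) pairs standing in for the scraped Wikipedia data and networkx standing in for Gephi:

```python
# A minimal sketch of the influence-network approach the post describes.
# The `influences` pairs are placeholders, not real scraped data.
import networkx as nx

influences = [
    ("Descartes", "Plato"),
    ("Leibniz", "Descartes"),
    ("Kant", "Leibniz"),
]

G = nx.DiGraph()
for philosopher, influence in influences:
    # Edge points from the influencer to the philosopher they influenced.
    G.add_edge(influence, philosopher)

# Rank philosophers by how many later thinkers they influenced.
print(sorted(G.out_degree(), key=lambda kv: -kv[1]))
```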

BigSee < Main < WikiTADA This page is for the SHARCNET and TAPoR text visualization project. Note that it is a work in progress, as this is an ongoing project. At the University of Alberta we picked up the project and gave a paper at the Chicago Colloquium on Digital Humanities and Computer Science with the title The Big See: Large Scale Visualization. The Big See is an experiment in high-performance text visualization. We are looking at how a text or corpus of texts could be represented if processing and the resolution of the display were not an issue. Most text visualizations, like word clouds and distribution graphs, are designed for the personal computer screen.

Stylometry Stylometry is often used to attribute authorship to anonymous or disputed documents. It has legal as well as academic and literary applications, ranging from the question of the authorship of Shakespeare's works to forensic linguistics. Stylometry grew out of earlier techniques of analyzing texts for evidence of authenticity, authorial identity, and other questions.
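The page doesn't spell out how attribution works in practice; one common technique is to compare relative frequencies of function words, whose use tends to be habitual rather than topical. A minimal Python sketch under that assumption, with placeholder snippets standing in for real documents:

```python
# A minimal sketch of one common stylometric technique: comparing
# relative frequencies of function words across candidate texts.
# The texts below are placeholders, not real disputed documents.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p: list[float], q: list[float]) -> float:
    # Smaller distance suggests (but never proves) a closer style.
    return sum(abs(a - b) for a, b in zip(p, q))

known = "It was the best of times, it was the worst of times."
disputed = "It is a truth universally acknowledged that a single man..."
print(distance(profile(known), profile(disputed)))
```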

Zipf's law Zipf's law /ˈzɪf/, an empirical law formulated using mathematical statistics, refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions. The law is named after the American linguist George Kingsley Zipf (1902–1950), who first proposed it (Zipf 1935, 1949), though the French stenographer Jean-Baptiste Estoup (1868–1950) appears to have noticed the regularity before Zipf.[1] It was also noted in 1913 by the German physicist Felix Auerbach (1856–1933).[2] Zipf's law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, and so on.
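The inverse-proportionality claim is easy to check on any corpus: if the law holds, frequency times rank should stay roughly constant. A minimal Python sketch, where corpus.txt is a placeholder for any plain-text file you have on hand:

```python
# A minimal sketch checking Zipf's law: under the law, the r-th most
# frequent word occurs about 1/r times as often as the most frequent,
# so freq * rank should be roughly constant across the top ranks.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:  # placeholder file name
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words).most_common()

for rank, (word, freq) in enumerate(counts[:10], start=1):
    print(f"{rank:>2} {word:<12} freq={freq:<6} freq*rank={freq * rank}")
```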

The Stanford NLP (Natural Language Processing) Group A natural language parser is a program that works out the grammatical structure of sentences, for instance, which groups of words go together (as "phrases") and which words are the subject or object of a verb. Probabilistic parsers use knowledge of language gained from hand-parsed sentences to try to produce the most likely analysis of new sentences. These statistical parsers still make some mistakes, but commonly work rather well. Their development was one of the biggest breakthroughs in natural language processing in the 1990s.
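The page describes the group's Java parser; as an illustration of what such a parser returns, here is a minimal sketch assuming the Stanford NLP Group's Stanza Python package and its constituency processor (an assumption on my part; the page itself documents the Java distribution):

```python
# A minimal sketch, assuming the Stanza package (pip install stanza)
# from the Stanford NLP Group; not the Java parser the page describes.
import stanza

# Download the English models once (cached afterwards).
stanza.download("en")

# Build a pipeline with the constituency parser enabled.
nlp = stanza.Pipeline("en", processors="tokenize,pos,constituency")

doc = nlp("The quick brown fox jumps over the lazy dog.")
for sentence in doc.sentences:
    # The bracketed tree shows which words group together as phrases.
    print(sentence.constituency)
```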

Parsing Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic and other information. The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc." [2] This term is especially common when discussing what linguistic cues help speakers to interpret garden-path sentences.
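A toy example makes the idea of constituents concrete. A minimal sketch using NLTK's chart parser with a hand-written context-free grammar (my own toy grammar, not something from the page):

```python
# A minimal sketch: parse a sentence into a constituent tree with NLTK,
# using a deliberately tiny hand-written context-free grammar.
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N -> 'dog' | 'cat'
V -> 'chased'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog chased the cat".split()):
    # Draws the parse tree, with each phrase (NP, VP) as a constituent.
    tree.pretty_print()
```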
