
Data on 12 million companies worldwide

OpenCorporates: The Open Database Of The Corporate World

https://opencorporates.com/

Data Science Toolkit

Usage, command line on OS X and Linux: download python_tools.zip, extract it into a new folder, cd into it, and run ./install. This will create a set of scripts you can run directly from the command line, like this: html2text | text2people (a sketch of such a pipeline appears below).

Performance is a Feature

We've always put a heavy emphasis on performance at Stack Overflow and Stack Exchange. Not just because we're performance wonks (guilty!), but because we think speed is a competitive advantage. There's plenty of experimental data proving that the slower your website loads and displays, the fewer people will use it. [Google found that] the page with 10 results took 0.4 seconds to generate.

New Eurostat website - Eurostat, 15 December 2014

The website has undergone a complete design overhaul to make it more attractive and easier to use, although its overall structure remains the same. The technological infrastructure supporting the website has also been replaced. The data extraction and visualization tools will not change and will keep the same functionality. What will change for you: URL changes, so please update your bookmarks accordingly. The root URL will change, and the bulk download URL will change. In this Excel file you can find a mapping of the links of the sections between the old and the new website.
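Returning to the Data Science Toolkit entry above, a minimal sketch of such a pipeline, assuming ./install has put the scripts on the PATH; the URL and the file name page.html are illustrative assumptions, not from the original:

    # Fetch some HTML to work with (hypothetical URL and file name).
    curl -s https://example.com/ > page.html
    # Strip the markup, then extract person names from the plain text.
    html2text < page.html | text2people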

List of datasets for machine learning research - Wikipedia

These datasets are used for machine learning research and have been cited in peer-reviewed academic journals and other publications. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less-intuitively, the availability of high-quality training datasets.[1] High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce.[2][3][4][5] This list aggregates high-quality datasets that have been shown to be of value to the machine learning research community from multiple data repositories, to provide greater coverage of the topic than is otherwise available.

jar

If you are using packages, you must run jar.exe from the root directory of the class tree to ensure your package names get properly included, because the names you give jar.exe on the command line are the names it will blindly use for your packages inside the created jar file. You don’t want any names like C:\com\mindprod\mypackage.MyClass.class or \com\mindprod\mypackage.MyClass.class or MyClass.class (unless you have no packages). You want names like com\mindprod\mypackage\MyClass.class, which translates to com.mindprod.mypackage.MyClass.class on the command line. They have to make sense both as filenames and as fully qualified package/class names. In other words, if you are using packages, build your jar from the directory that contains the top of the package tree, not from down inside the directory where the class files themselves sit.
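For illustration, a minimal sketch under assumed names (~/myproject/classes and mypackage.jar are hypothetical); the same layout rule applies on Windows with backslashes:

    # Run jar from the directory that CONTAINS the com/ tree, so the
    # entry stored in the jar is com/mindprod/mypackage/MyClass.class.
    cd ~/myproject/classes
    jar cvf mypackage.jar com/mindprod/mypackage/MyClass.class
    # List the archive to check that the stored names make sense as
    # package-qualified paths.
    jar tf mypackage.jar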

Aleph - Databases

Aleph is a tool, built by Friedrich Lindenberg during his ICFJ Knight Fellowship, for indexing large amounts of both text (PDF, Word, HTML) and tabular (CSV, XLS, SQL) data for easy browsing and search. It is built with investigative reporting as a primary use case, and it allows cross-referencing mentions of well-known people and companies against watch lists, which are built from prior research or public data sets. The tool was first developed for ANCIR, as part of Grano, a reporting tool for investigating the connections between public and private officials. Following his ICFJ Knight Fellowship, Lindenberg used Aleph to power a data search feature for OCCRP's Investigative Dashboard. This investigative tool lets reporters search more than 2.4 million documents and datasets from previous OCCRP investigations as well as official sources and other scraped data.

Famous Perl One-Liners Explained, Part I: File Spacing

Hi all! I am starting yet another article series here. Remember my two articles on Awk One-Liners Explained and Sed One-Liners Explained? They have received more than 150,000 views total now, and they attract several thousand new visitors every week. Inspired by their success, I am going to create my own perl1line.txt file and explain every single one-liner in it. I hope it becomes as popular as awk1line.txt and sed1line.txt.
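To give a flavour of the file-spacing one-liners this first part covers, here is the classic double-spacing idiom (file.txt is a placeholder name):

    # Double-space a file: -p prints each input line after the code runs,
    # and $\ (the output record separator) is appended to every print,
    # so setting it to "\n" emits a blank line after each line.
    perl -pe '$\ = "\n"' file.txt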

Free Image Hosting, Photo sharing & Earn Money

Benefits: we pay up to $4.20 for each 1,000 image views, with a minimum payment of only $4.00. We pay you 10% of each of your referrals' earnings for life. No hidden rules to hold your payments.

New University of Utah center offers some serious computing muscle to handle 'extreme data'

SALT LAKE CITY — A picture may be worth a thousand words, but in the fields of science, they can be worth billions and billions of bytes of information. A few years ago, then Hewlett-Packard CEO Mark Hurd said "more data will be created in the next four years than in the history of the planet." Hurd's prediction was understated: studies have shown that humanity had already created more computer data than all the documents of the entire past 40,000 years — and that was in 2007. Housed within tall whirring towers of servers at the University of Utah's Scientific Computing and Imaging Institute are mind-boggling amounts of information: global weather data, chemistry combustion simulations on space shuttle heat shield panels, and physics experiments, just to name a few.
