
Repository maps

Related: Software, Systems, Hosted Solutions, Investigación

DuraSpace Technologies. The DuraSpace technology portfolio crosses the boundaries of institutional systems, the Web, and cloud infrastructure, and inherently addresses representation and preservation of digital content. Our open source software and services help to ensure that current and future generations have access to our collective digital heritage, and currently power more than 2,000 sites in 90 countries. DSpace is a turnkey institutional repository application, Fedora is a framework for building digital repositories, VIVO is a locally hosted system for showcasing the scholarship of an institution, and DuraCloud is an open source platform and managed service that provides on-demand storage and services for digital content in the cloud. We are continually improving and expanding DuraSpace open technologies to provide you with durable, flexible software solutions that integrate seamlessly with your infrastructure.

Directory of Open Access Repositories: search or browse for repositories. A sample of listed repositories: <intR>²Dok, "Ergani - Historical Archive of Aegean" Repository, 4TU.Centre for Research Data, AAB College repository, Aaltodoc Publication Archive, ABACUS, Aberdeen University Research Archive (AURA), Abertay Research Collections, CADAIR (Aberystwyth University Repository), ABU Zaria Research Publications, Academic Digital Library (Akademickiej Bibliotece Cyfrowej) (ABC - KRAKÓW), Academic Research Repository at the Burgas Free University, ARRChNU (Academic Research Repository at the ChNU), Academic Research Repository at the Institute of Developing Economies (ARRIDE), Academica-e, ARCA - IGC (Access to Research and Communications Annals), ARAN (Access to Research at National University of Ireland, Galway), ARRT (Access to Research Resources for Teachers), ACEReSearch, Acervo Digital da Unesp

eSciDoc.PubMan, a publication repository software. MPG.PuRe is the publication repository of the Max Planck Society. It contains bibliographic data and numerous full texts of the publications of its researchers. The repository is based on eSciDoc.PubMan, publication repository software developed by the Max Planck Digital Library. Migration of the database of the predecessor system, eDoc, into this repository is currently in progress. Tools and interfaces include: search and export, for integrating your PubMan data within an external system; a data-transfer service for fetching data from external sources such as arXiv, BioMed Central, PubMed Central, or Spires; a validation service that checks your XML data for formal correctness; Control of Named Entities (CoNE), for searching and administering controlled vocabularies for persons, journals, classifications, or languages; and a SWORD interface.
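A fetch service of the kind described above can, for arXiv, be as simple as querying arXiv's public Atom API and parsing the returned entries. A minimal sketch using only the Python standard library; the function names are illustrative and not part of PubMan:

```python
import urllib.parse
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def arxiv_query_url(search: str, max_results: int = 5) -> str:
    """Build a query URL for the public arXiv Atom API."""
    params = urllib.parse.urlencode(
        {"search_query": search, "max_results": max_results}
    )
    return f"http://export.arxiv.org/api/query?{params}"

def entry_titles(atom_xml: str) -> list[str]:
    """Extract the title of each <entry> from an Atom feed document."""
    root = ET.fromstring(atom_xml)
    return [
        entry.findtext(f"{ATOM_NS}title", default="").strip()
        for entry in root.iter(f"{ATOM_NS}entry")
    ]
```

Fetching `arxiv_query_url(...)` over HTTP and feeding the response body to `entry_titles` yields the titles; the same parse-and-extract pattern applies to the other Atom/OAI sources mentioned.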

A Contratiempo | Sonidos y sentidos: Interview with Steven Feld. By Rita de Cácia Oenning da Silva, Universidad Federal de Santa Catarina. Although Steven Feld needs no introduction for any ethnomusicologist, this interview, titled Sonidos y Sentidos, which recently appeared in the Revista de Antropologia of the University of São Paulo, opens with an introduction by the interviewer, Rita de Cácia, that conveys just how interesting and valuable both the interviewee and the interview are. Feld also emphasizes the importance of publishing in formats alternative to text, such as CDs and DVDs for viewing and listening, and argues for the importance of doing anthropology not only of sound but through sound, while acknowledging the limited academic recognition these formats and media of knowledge dissemination still receive. Along the way, he also tells us about his life, his approach to research, and his publishing project of books, records, and videos.

EPrints | open-source digital repository platform About EPrints Welcome to the home of EPrints, the world-leading open-source digital repository platform. Developed at the University of Southampton, EPrints has been providing stable, innovative repository services across the academic sector and beyond for over 15 years. We are proud of the stability, flexibility and pragmatism of our software. About EPrints Services EPrints Services is our not-for-profit commercial services organisation, which has been building & hosting repositories, training users and developing bespoke functionality for over 10 years. The EPrints team is committed to working closely with clients to develop tailor-made repositories that fulfil their requirements, and we are proud to be supporting EPrints installations throughout the world.

SONIC IDEAS. Welcome. Ideas Sónicas / Sonic Ideas is one of the fruits of CMMAS and, at the same time, the first of a series of projects oriented toward research into, and dissemination of, all aspects related to music, sound art, and technology. This publication is aimed at a broad group of readers, including researchers, artists, musicians, composers, performers, and musicologists. The main objective of this biannual journal is to stimulate, generate, and disseminate information about activities and developments in the field, promoting interaction among English- and Spanish-speaking composers, performers, researchers, and listeners. Open to all aesthetic points of view inside and outside academia, this publication aims to spread new and challenging perspectives on technology, to explore its influence on music and the sound arts, and to promote serious research and debate on these topics.

CNRI Handles - Handle.Net Registry

Deep Web Search Engines | Deep Web Search - A How-To Site. Where to start a deep web search is easy: you hit Google.com, and when you brick-wall there, you go to scholar.google.com, Google's academic database. After you brick-wall there, your true deep web search begins. You need to know something about your topic in order to choose the next tool. To all the 35F and 35Gs out there at Fort Huachuca and elsewhere: you will find some useful links here to home in on your AO. If you find a bad link, comment it below. Last updated July 12, 2016 (updated reverse image lookup). Multi-search engines: Deeperweb.com (broken as of Sept 2016, hopefully not dead) is my favorite search engine. Surfwax has a 2011 interface for RSS and a 2009 interface I think is better. www.findsmarter.com lets you filter the search by domain extension or by topic, which is quite neat. Cluster analysis engine: TouchGraph, a brilliant clustering tool that shows you relationships in your search results using a damn spiffy visualization.

PURL help. What is a PURL? A PURL is a persistent URL: it provides a permanent address for accessing a resource on the web. When a user retrieves a PURL, they are redirected to the current location of the resource. When an author needs to move a page, they can update the PURL to point to the new location. PURLs with a common prefix are grouped together into domains. PURL types: each PURL has a target and a status code, or type. A partial PURL is a special type that matches the beginning of a URL. The HTTP status code definitions provide more detail about the different status codes and their meanings. Claiming and administering PURL domains: the PURL service is now administered by the Internet Archive. Searching for a PURL: PURLs are grouped into domains, and domains can be searched from the home page; the domain search shows a list of domains that match the search criteria. This is version 1.2.2.

White Paper: The Deep Web: Surfacing Hidden Value. This white paper is a version of the one on the BrightPlanet site. Although it is designed as a marketing tool for a program "for existing Web portals that need to provide targeted, comprehensive information to their site visitors," its insight into the structure of the Web makes it worthwhile reading for all those involved in e-publishing. —J.A.T. Searching on the Internet today can be compared to dragging a net across the surface of the ocean. Traditional search engines create their indices by spidering or crawling surface Web pages. The deep Web is qualitatively different from the surface Web: its content is considerably more diverse, and its volume certainly much larger, than commonly understood. If the most coveted commodity of the Information Age is indeed information, then the value of deep Web content is immeasurable. Note that, though the terms are sometimes used synonymously, the World Wide Web (the HTTP protocol) is but a subset of Internet content.

Persistent uniform resource locator - Wikipedia. A persistent uniform resource locator (PURL) is a uniform resource locator (URL), i.e., a location-based uniform resource identifier (URI), that is used to redirect to the location of the requested web resource. PURLs redirect HTTP clients using HTTP status codes. PURLs are used to curate the URL resolution process, thus solving the problem of transitory URIs in location-based URI schemes like HTTP. Technically, PURL string resolution resembles SEF (search-engine-friendly) URL resolution. History: the PURL concept was developed at OCLC (the Online Computer Library Center) in 1995 and implemented using a forked pre-1.0 release of Apache HTTP Server.[1] The software was modernized and extended in 2007 by Zepheira under contract to OCLC, and the official website moved to a new domain (the 'Z' in its name came from Zepheira and was used to differentiate the PURL open-source software site from the PURL resolver operated by OCLC). PURL version numbers may be considered confusing.

The Ultimate Guide to the Invisible Web. Search engines are, in a sense, the heartbeat of the internet; "Googling" has become a part of everyday speech and is even recognized by Merriam-Webster as a grammatically correct verb. It's a common misconception, however, that Googling a search term will reveal every site out there that addresses your search. Typical search engines like Google, Yahoo, or Bing actually access only a tiny fraction, estimated at 0.03%, of the internet. The sites that traditional searches yield are part of what's known as the Surface Web, which is composed of indexed pages that a search engine's web crawlers are programmed to retrieve; as much as 90 percent of the internet is only accessible through deep web sites. So where's the rest? Understanding how surface pages are indexed by search engines can help you understand what the Deep Web is all about, and why it is invisible to those engines.

Handle System - Wikipedia. As with handles used elsewhere in computing, Handle System handles are opaque and encode no information about the underlying resource; they are bound only to metadata regarding the resource. Consequently, the handles are not rendered invalid by changes to that metadata. The system was developed by Bob Kahn at the Corporation for National Research Initiatives (CNRI). The original work was funded by the Defense Advanced Research Projects Agency (DARPA) between 1992 and 1996, as part of a wider framework for distributed digital object services,[2] and was thus contemporaneous with the early deployment of the World Wide Web, with similar goals. The Handle System was first implemented in autumn 1994 and was administered and operated by CNRI until December 2015, when a new "multi-primary administrator" (MPA) mode of operation was introduced. Thousands of handle services are currently running. Some Handle System namespaces define special presentation rules.
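A handle is a two-part string, "prefix/suffix", where the prefix names the naming authority and the suffix is the local name. A minimal sketch of splitting a handle and building a lookup URL for the Handle.Net web proxy's REST interface; the helper names are illustrative, and the `10.1000/182` example handle is the DOI Handbook's own handle used purely as sample input:

```python
import urllib.parse

def split_handle(handle: str) -> tuple[str, str]:
    """Split a handle into its naming-authority prefix and local suffix.

    The first "/" separates the two parts, e.g. "10.1000/182"
    has prefix "10.1000" and suffix "182".
    """
    prefix, sep, suffix = handle.partition("/")
    if not sep or not prefix or not suffix:
        raise ValueError(f"not a valid handle: {handle!r}")
    return prefix, suffix

def proxy_resolution_url(handle: str) -> str:
    """Build a resolution URL for the Handle.Net proxy's REST API."""
    return ("https://hdl.handle.net/api/handles/"
            + urllib.parse.quote(handle, safe="/"))
```

An HTTP GET on the built URL returns the handle's value records as JSON, illustrating the point above: the handle itself stays stable while the metadata it resolves to can change freely.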

News aggregator - Wikipedia. Function: visiting many separate websites frequently to find out whether their content has been updated can take a long time. Aggregation technology consolidates many websites into one page that shows the new or updated information from all of them. Aggregators reduce the time and effort needed to regularly check websites for updates, creating a unique information space or personal newspaper. History: RSS began in 1999, "when it was first introduced by Internet-browser pioneer Netscape".[2] In the beginning, RSS was not a user-friendly gadget, and it took some years to spread; early RSS was built on an RDF-based data model that people inside Netscape felt was too complicated for end users. Types: there are two types of web aggregators:[6] those that simply gather material from various sources and put it on their websites, and those that gather and distribute content, after completing the appropriate organizing and processing, to suit their customers' needs.
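The core of the first aggregator type, gathering material from various sources, is just fetching each site's RSS feed and merging the items. A minimal sketch of the parsing step using only the Python standard library; the sample feed contents in the usage below are hypothetical:

```python
import xml.etree.ElementTree as ET

def rss_items(rss_xml: str) -> list[dict[str, str]]:
    """Return the title and link of each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [
        {
            "title": item.findtext("title", default="").strip(),
            "link": item.findtext("link", default="").strip(),
        }
        for item in root.iter("item")
    ]
```

An aggregator would run this over every subscribed feed on a schedule, deduplicate by link, and sort the merged items by publication date to build the "personal newspaper" page described above.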
