



Towards 3 million specimens: Digitising Amaryllidaceae & Alliaceae. The following blog was written by Iain Ratter, a digitiser in the Herbarium. Since 2021 we have increased our digitisation capacity with the goal of reaching 1 million specimens imaged by Autumn 2024.

Each digitiser is assigned a family of plants to work through. This series of blogs will spotlight the families completed by members of the team. The Amaryllidaceae are a cosmopolitan family found in pantropical and subtropical zones. They are herbaceous and mostly grow from bulbs. The genus Allium and its allies are currently classified in the herbarium as Alliaceae, but this group has since been reduced to the subfamily Allioideae. Our collections at Edinburgh: prior to the complete digitisation of the Amaryllidaceae we had records for 767 specimens; specimens of Amaryllidaceae can be searched on our online catalogue. Prior to the complete digitisation of the Alliaceae we had records for 3,531 specimens. Nanopublications tailored to biodiversity data.

Novel nanopublication workflows and templates for associations between organisms, taxa and their environment are the latest outcome of the collaboration between Knowledge Pixels and Pensoft. First off, why nanopublications? Nanopublications complement human-created narratives of scientific knowledge with elementary, simple and straightforward machine-actionable scientific statements that promote sharing, findability, accessibility, citability and interoperability. By making it easier to trace individual findings back to their origin and/or follow-up updates, nanopublications also help users better understand the provenance of scientific data.
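The structure described above can be sketched concretely. A nanopublication is conventionally four named RDF graphs (head, assertion, provenance, publication info); the sketch below assembles that layout as TriG text. All URIs, the subject/predicate/object, and the ORCID are placeholders of my own, not the Knowledge Pixels/Pensoft templates:

```python
# Minimal sketch of a nanopublication's four named graphs, rendered as TriG.
# Every URI below is an illustrative placeholder, not a real published nanopub.

def make_nanopub(np_uri: str, subject: str, predicate: str, obj: str,
                 orcid: str) -> str:
    """Assemble a skeletal nanopublication: one assertion triple, plus
    provenance and publication-info graphs that point back to it."""
    return f"""@prefix np: <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .

<{np_uri}#Head> {{
  <{np_uri}> a np:Nanopublication ;
    np:hasAssertion <{np_uri}#assertion> ;
    np:hasProvenance <{np_uri}#provenance> ;
    np:hasPublicationInfo <{np_uri}#pubinfo> .
}}

<{np_uri}#assertion> {{
  <{subject}> <{predicate}> <{obj}> .
}}

<{np_uri}#provenance> {{
  <{np_uri}#assertion> prov:wasAttributedTo <{orcid}> .
}}

<{np_uri}#pubinfo> {{
  <{np_uri}> prov:wasAttributedTo <{orcid}> .
}}
"""

trig = make_nanopub(
    "https://example.org/np1",                # placeholder nanopub URI
    "https://example.org/organism/Allium",    # placeholder subject
    "https://example.org/foundIn",            # placeholder relation
    "https://example.org/habitat/grassland",  # placeholder object
    "https://orcid.org/0000-0000-0000-0000",  # placeholder author ORCID
)
```

Because each graph is named, a machine can cite the assertion alone while still being able to retrieve who made it and when, which is what makes the statement both findable and traceable.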

"With the nanopublication format and workflow, authors make sure that key scientific statements – the ones underpinning their research work – are efficiently communicated in both a human-readable and a machine-actionable manner, in line with the FAIR principles. Thus, their contributions to science are better prepared for a reality driven by AI technology," said Prof. Talking about a 'schism' is ahistorical | by Emily M. Bender | Jul, 2023 | Medium. In two recent conversations with very thoughtful journalists, I was asked about the apparent 'schism' between those making a lot of noise about fears inspired by fantasies of all-powerful 'AIs' going rogue and destroying humanity, and those seeking to illuminate and address the actual harms being done in the name of 'AI' now, and the risks that we see following from increased use of this kind of automation. Commentators framing these positions as some kind of debate or dialectic refer to the former as 'AI Safety' and the latter as 'AI ethics'.

In both of those conversations, I objected strongly to the framing and tried to explain why it is ahistorical. I want to try to reproduce those comments in blog form here. The problem with the 'schism' framing is that to talk about a 'schism' is to talk about something that once was a whole and is now broken apart; authors who use this metaphor thus imply that such a whole once existed. Thought-provoking paper about data curation & the interplay between craft, standards, and the visibility/acknowledgment of data curators in projects and institutions @an_dre_a_. Environmental Data Cube Support System (EDCSS). As a provider of correlated, integrated natural environments to Department of Defense (DoD) modeling and simulation (M&S), the Environmental Data Cube Support System (EDCSS) is becoming a key component of the DoD M&S Data Enterprise.

Currently under continuing development, the EDCSS is a production capability focused on generating and distributing natural environmental data, effects, and products required to support M&S events. The EDCSS addresses integration across all environmental domains (air, ocean, space, terrain) by constructing environmental representations from authoritative source data providers and generating effects from DoD standard soil strength and mobility models as well as modeled sensor responses. The EDCSS allows for the selection of realistic historical scenarios as the basis of the environment representation. Visit metoc.org for more information on EDCSS and ERTB. TaxonWorks. How long does taxonomy take? Botanist Jessie Prebble can tell you | Te Papa’s Blog.

Taxonomic research involves a number of aspects, including field trips, lab work, studying and comparing live plants (in the field or glasshouse) or pressed specimens, and reading previous scientific papers. Not to mention analysing and interpreting the data, incorporating previously published research, and writing up the results for publication. Sometimes such research forms the basis of a post-graduate thesis (Master's or PhD). Curator of Botany Heidi Meudt talks about one student's journey. Jessie Prebble started her PhD research in 2012, when she went on one of her first forget-me-not field trips to Northland.

During the course of her PhD (2012–2016), she was based here at Te Papa for about two years, where she was able to work very closely with the collections and Botany staff in our herbarium. She finished her thesis in 2016 and is now a botanist at Manaaki Whenua – Landcare Research. Taxonomy takes time: Jessie published the final chapter of her PhD thesis in May 2022. Thoughts on TreeBASE dying(?) @rvosa is Naturalis no longer hosting Treebase? Hilmar Lapp (@hlapp) May 10, 2022. So it looks like TreeBASE is in trouble, its legacy Java code a victim of security issues. Perhaps this is a chance to rethink TreeBASE, assuming that a repository of published phylogenies is still considered a worthwhile thing to have (and I think that question is open).

Here's what I think could be done. The data (individual studies with trees and data) are packaged into whatever format is easiest (NEXUS, XML, JSON) and uploaded to a repository such as Zenodo for long-term storage. There are lots of details to tweak, for example how many of the existing URLs for studies are preserved (some URL mapping), and what about the API? My sense is that the TreeBASE code is very much of its time (10-15 years ago): a monolithic block of code with SQL, Java, etc. Another issue is how to support TreeBASE. Relevant code and sites. The LOTUS initiative for open knowledge management in natural products research.
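The packaging-plus-URL-mapping idea sketched above can be made concrete. This is a hypothetical packaging step of my own, not TreeBASE code: the field names, study id, and DOI are illustrative, and the legacy URL pattern simply mimics TreeBASE's study-summary links.

```python
import json

# Hypothetical step: bundle one TreeBASE study (metadata plus its NEXUS
# payload) into a single JSON object suitable for deposit in a repository
# such as Zenodo, recording the legacy URL so a redirect map can be built.

def package_study(study_id: str, title: str, nexus: str, new_doi: str) -> dict:
    legacy_url = (
        "https://treebase.org/treebase-web/search/study/summary.html"
        f"?id={study_id}"
    )
    return {
        "study_id": study_id,
        "title": title,
        "formats": {"nexus": nexus},   # could also carry NeXML / JSON trees
        "legacy_url": legacy_url,      # preserved for URL redirect mapping
        "archive_doi": new_doi,
    }

pkg = package_study(
    "S1234",                           # placeholder study id
    "Example phylogeny",
    "#NEXUS\nBEGIN TREES;\n  TREE t1 = ((A,B),C);\nEND;",
    "10.5281/zenodo.0000000",          # placeholder DOI
)
blob = json.dumps(pkg, indent=2)       # the object that would be uploaded
```

A batch job over all studies would then emit one such deposit per study plus a single `legacy_url -> archive_doi` table, which is enough to keep most existing study links resolving.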

Essential revisions: 1) Reviewers would like to see the authors clarify the scope of their work and address to what extent this database will provide the community with a comprehensive database of structure-organism pairs that is actually usable to answer research questions. This could be achieved by adding concrete examples of how these data could be used, to help readers/reviewers understand the scope.

We thank the editors and reviewers for these suggestions. The scope of our work is dual: LOTUS was designed both to help gather and exploit past NP research output (structure-organism pairs) and to facilitate future formatting and reuse. LOTUS provides not only a wide set of curated and documented structure-organism pairs, but also a set of tools to gather, organize, and interrogate them. Before this work, efficient access to information on the occurrences of specialized metabolites was a complicated task.
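The central record type, a documented structure-organism pair, can be sketched minimally. The schema below is a simplification of my own (the real LOTUS model is richer): each pair links a chemical structure, an organism, and the reference documenting the pair, and "interrogating" a set of pairs then reduces to ordinary filtering.

```python
from dataclasses import dataclass

# Illustrative only: a curated structure-organism pair with its documentation.
# Field names are my own simplification, not the actual LOTUS schema.

@dataclass(frozen=True)
class StructureOrganismPair:
    structure_inchikey: str   # the specialized metabolite, as an InChIKey
    organism_name: str        # the taxon the metabolite was reported from
    reference_doi: str        # the publication documenting the pair

pairs = [
    StructureOrganismPair(
        "AAAAAAAAAAAAAA-UHFFFAOYSA-N",   # placeholder InChIKey
        "Allium sativum",
        "10.0000/example",               # placeholder DOI
    ),
]

# "Interrogating" the curated set is then ordinary filtering:
allium_pairs = [p for p in pairs if p.organism_name.startswith("Allium")]
```

Keeping the reference on every pair is what makes the data documented rather than merely aggregated: each occurrence claim stays traceable to its source.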

Reviewers were right; this is indeed a very pertinent point. ICEDIG Project Outcomes. Addressing today's global environmental challenges requires access to significant quantities of data. This holds especially true for the natural sciences, where one rich data trove remains unearthed: the European scientific collections. These jointly hold more than 1.5 billion objects, representing 80% of the world's bio- and geo-diversity. With only 10% of these objects digitised, their information remains vastly underused, impeding potential applications of this critical scientific resource. The EU-funded ICEDIG project, "Innovation and Consolidation for Large Scale Digitisation of Natural Heritage", aims to support the implementation phase of the new research infrastructure DiSSCo ("Distributed System of Scientific Collections") by designing and addressing the technical, financial, policy and governance aspects necessary to operate such a large distributed initiative for natural sciences collections across Europe.

IAPT | Grants. #OTD 1946 John Gregory Hawkes (Jack) was admitted as a Fellow of @LinneanSociety. He specialised in the taxonomy of the wild potato. With Dorothy Cadbury he mapped the flora of Warwickshire (1971) using new electronic data processing methods to sort and. A brief history of bioinformatics | Briefings in Bioinformatics, Volume 20, Issue 6, November 2019. Jeff Gauthier, Institut de Biologie Intégrative et des Systèmes (IBIS), Département de Biologie, Université Laval, 1030, av. de la Médecine, Québec, Canada. Corresponding author: Jeff Gauthier, Institut de Biologie Intégrative et des Systèmes, 1030 avenue de la Médecine, Université Laval, Quebec City, QC G1V0A6, Canada.

The Life and Death of Data. Thanks to @danielskatz for a thought-provoking lecture on #FAIR for #ResearchSoftware. We discussed whether it is time to think about #FAIR4DT: FAIR For #DigitalTwins. Perhaps. Initially, we should focus on the components (data, software, models, workflow. Debunking reliability myths of PIDs for Digital Specimens – DiSSCoTech. In this post I address an erroneous assertion – a myth, perhaps – that the proposed Digital Specimen Architecture relies heavily on a centralized resolver and registry for persistent identifiers that is inherently not distributed, and that this makes the proposed "persistent" identifiers (PIDs) for Digital Specimens unreliable. By 'unreliable' is meant link rot ('404 not found') and/or content drift (the content today is not the same as the content yesterday). This assertion and its concerns (myths) came up during a lively Q&A and associated 'chat' while I was presenting recent progress on the openDS standard at the virtual TDWG 2020 SYM07 symposium this week.

I want to show that any such issues are not issues of the persistent identifier scheme itself or of its associated service-provider organizations, but are usually human failings: inadequacies in the management and procedures adopted by the users of such schemes. Myth: doi.org is centralized.
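The two failure modes named above can be made operational with a small sketch. This is an illustrative monitoring check of my own, not DiSSCo's implementation: link rot is detected as a failed resolution, content drift as a changed content fingerprint.

```python
import hashlib

# Sketch of what "unreliable" means operationally. Link rot: the identifier
# no longer resolves (HTTP 4xx/5xx). Content drift: it still resolves, but
# the bytes served today differ from the bytes recorded earlier. Both are
# curation failures by the registrant, not failures of the PID scheme.

def is_link_rot(status_code: int) -> bool:
    """A 4xx/5xx answer from the resolved landing page counts as rot."""
    return status_code >= 400

def has_content_drift(previous_sha256: str, current_content: bytes) -> bool:
    """Compare a stored fingerprint with what the identifier serves now."""
    return hashlib.sha256(current_content).hexdigest() != previous_sha256

# A monitoring job would fetch https://doi.org/<doi>, record the status code
# and a content hash at registration time, then re-check both periodically.
baseline = hashlib.sha256(b"specimen record v1").hexdigest()
rotted = is_link_rot(404)                                     # True
drifted = has_content_drift(baseline, b"specimen record v2")  # True
```

Crucially, both checks act on what the *registrant* serves; keeping them green is a management discipline, which is the post's point about where reliability actually lives.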

Stats - sfg.taxonworks.org. Posters for TDWG 2020 - TDWG. An analysis of data paper templates and guidelines: types of contextual information described by data journals. Introduction: Data sharing is an emerging scholarly communication practice that facilitates the progress of science by making data accessible, verifiable, and reproducible [1]. There are several ways of sharing data, including personally exchanging data sets, posting data on researchers' or laboratories' websites, and depositing data sets in repositories.

A relatively novel means of releasing data sets is the publication of data papers, which describe how data were collected, processed, and verified, thereby improving data provenance [2]. Data papers are published by data journals, and the publication process is similar to that of conventional journals, in that data papers and data are both peer-reviewed, amended, and publicly accessible under unique identifiers [3]. Since data papers take the form of academic papers and can be cited by primary research articles, credit can be given to data creators [4]. A benchmark dataset of herbarium specimen images with label data.

Costbook of the digitisation infrastructure of DiSSCo. A workflow for standardising and integrating alien species distribution data. 5 essential tools for nature conservation we are still missing (Part 1/2). Small Collections Network. The Impact of Brazil's Virtual Herbarium in e-Science. By Dora Ann Lange Canhos, Sidnei de Souza, Alexandre Marino, Vanderlei Perez Canhos (Centro de Referência em Informação Ambiental, CRIA) and Leonor Costa Maia (Universidade Federal de Pernambuco, UFPE). Summary: a herbarium, a collection of preserved samples of plants and fungi and associated data, is key documentation of the biodiversity of the past and an important instrument with which to model the biodiversity of the future. If prepared and maintained correctly, these specimens hold their scientific value for centuries. Comparing Brazil's collection of herbaria with that of Europe or the USA demonstrates a significant difference in the size of their holdings.

A herbarium can be defined as a collection of preserved samples of plants and fungi and associated data. Through herbarium data one can analyze species' distribution across both time and space. An important indicator is the movement of data (entries and removals) in the network, showing its dynamic nature. New ALA strategy for 2020-2025 – Atlas of Living Australia. Today we release our Atlas of Living Australia Strategy 2020-2025. The Atlas of Living Australia (ALA) strategy has been shaped extensively by input from our national and international partners, who contributed so actively to our 2019 ALA Future Directions national consultation process. As Australia's national biodiversity data infrastructure and one of the world's foremost such capabilities, we rely on the strength of our partnerships with data providers, users and stakeholders.

Indeed, the genesis of the ALA was built on the strength and richness of existing relationships within the museums, collections and herbaria communities. Australia's fruitful partnership with the Global Biodiversity Information Facility (GBIF) also gives our community a unique opportunity to ensure that local, regional or national biodiversity data delivers impact globally. The ALA is particularly proud of the role we play in hosting the Australian node of GBIF.

Case Study: Brazilian Virtual Herbarium. Data Management Plan: Brazil's Virtual Herbarium. The Tragedy of #OpenData - Comprehension 360. It Is A Commons Tale (no, that s is not a typ-o). I was asked recently about Open Data initiatives. Off-hand I gave one of my typical rough, if stylized, replies: "nonsense". Put simply, I have yet to find an Open Data set that contained any real value OR was not readily accessible to me without #OpenData frameworks. Now, if that opinion pisses you off, pay attention. As is most often the case, I tend to follow up rough statements of opinion with additional research. Did I change my mind? Open Data is not Open Source. I am a huge fan of both analogies and Open Source. There is a huge difference between property and intellectual property, between resources and ideas. Data, on the other hand, is a resource. The Tragedy Of The Commons Is A Strong Analogy. We are coming up on the two-century anniversary of William Forster Lloyd's tragedy of the commons.

The grass in the commons and data have much in common. Is Data's Value Really Used Up? Let's start with a different question. Is all hope lost? Born-digital collection software. Biologists conducting field research, such as floristic studies, accession thousands of specimens into natural history collections. Many of these specimens' digital records are now becoming available through online portals such as iDigBio, the Global Biodiversity Information Facility (GBIF) (Global Biodiversity Information Facility, 2018), Symbiota (Gries et al., 2014), and regional consortia (e.g., the SouthEast Regional Network of Expertise and Collections [SERNEC]). One major challenge in digitizing these specimens is the accurate transcription of physical labels into digital formats.

Numerous workflows have been presented to address this challenge, whereby citizen scientists, students, or professionals are tasked with transcribing these data (Hill et al., 2012; Ellwood et al., 2015; Harris and Marsico, 2017; Sweeney et al., 2018). Green digitization: Online botanical collections data answering real-world questions - Soltis - 2018 - Applications in Plant Sciences. Herbarium data: Global biodiversity and societal botanical needs for novel research - James - 2018 - Applications in Plant Sciences. The Australasian Virtual Herbarium: Tracking data usage and benefits for biological collections - Cantrill - 2018 - Applications in Plant Sciences. ePlant: Visualizing and exploring multiple levels of data for hypothesis generation. ePlant Steps into the Breach for Plant Researchers. ePlant: Data Visualization Tools for Plant Data. CyVerse: Meeting Those Midnight Computing Needs.

CyVerse | Cyberinfrastructure for Data Management and Analysis. Figshare: Research platform for biodiversity discovery. IPBES Data Management Policy. Where is Web Science? From 404 to 200. PhyloJive – Integrating biodiversity data with phylogenies | Atlas of Living Australia. Data mining and machine learning to identify collectors and collecting trips. Towards a biodiversity knowledge graph. Pensoft journals integrated with Catalogue of Life to help list the species of the world. Automated pipeline for nomenclatural acts. Confusion: The Biodiversity Informatics Landscape. Management, Archiving, and Sharing for Biologists and the Role of Research Institutions in the Technology-Oriented Age | BioScience.

Imago at Indiana U, links library and natural history databases. Cross-Linking NCBI (DNA) & EMu Records. Integration of Big Data and the Science of the Christmas Tree. Unmet Needs for Analyzing Biological Big Data: A Survey of 704 NSF PIs. RainBio: Using the “Natural History Large Hadron Collider” to tell us about plant diversity.