

ToddLamontUpton

Rocket ship builder, pizza expert, loves the Ravens, parent.

Open source. SourceForge - Download, Develop and Publish Free Open Source Software. Wikipedia. List of Linux distributions. This page provides general information about notable software distributions using the Linux kernel in the form of a categorized list; distributions are organized into sections by the major distribution or package management system they are based on. Among the Debian-based distributions, Ubuntu is designed to have regular releases, a consistent user experience and commercial support on both desktops and servers.[4] Its official derivatives, also known as Ubuntu flavours, simply install a set of packages different from the original Ubuntu, but since they draw additional packages and updates from the same repositories as Ubuntu, all of the same software is available for each of them.[5][6] Unofficial variants and derivatives are not controlled or guided by Canonical Ltd. and generally have different goals in mind.

Knoppix-based. National Renewable Energy Laboratory. The National Renewable Energy Laboratory (NREL), located in Golden, Colorado, is the United States' primary laboratory for renewable energy and energy efficiency research and development. NREL is a government-owned, contractor-operated facility funded through the U.S. Department of Energy (DOE); this arrangement allows a private entity to operate the lab on behalf of the federal government under a prime contract. Established in 1974,[3][Note 1] NREL began operating in 1977 as the Solar Energy Research Institute.

NREL was designated a national laboratory of the U.S. Department of Energy. Its areas of research and development expertise are renewable electricity, renewable fuels, integrated energy systems, and strategic energy analysis.[8] The laboratory projects that the levelized cost of wind power will decline about 25% from 2012 to 2030.[9]

Census Bureau Homepage. Google Earth. Google Earth is a virtual globe, map and geographical information program, originally called EarthViewer 3D, created by Keyhole, Inc., a Central Intelligence Agency (CIA) funded company acquired by Google in 2004 (see In-Q-Tel). It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography and geographic information system (GIS) data onto a 3D globe.

It was originally available with three different licenses, but has since been reduced to just two: Google Earth (a free version with limited function) and Google Earth Pro ($399 per year), which is intended for commercial use.[4] The third original option, Google Earth Plus, has been discontinued.[5][6] For parts of the Earth's surface, 3D images of terrain and buildings are available. Google Earth uses digital elevation model (DEM) data collected by NASA's Shuttle Radar Topography Mission (SRTM),[11] which means one can view the whole Earth in three dimensions. Keyhole Markup Language. A KML file specifies a set of features (place marks, images, polygons, 3D models, textual descriptions, etc.) for display in Here Maps, Google Earth, Maps and Mobile, or any other geospatial software implementing the KML encoding. Each place always has a longitude and a latitude.

Other data can make the view more specific, such as tilt, heading, and altitude, which together define a "camera view" along with a timestamp or timespan. KML shares some of the same structural grammar as GML. Some KML information cannot be viewed in Google Maps or Mobile.[4] The MIME type associated with KML is application/vnd.google-earth.kml+xml; the MIME type associated with KMZ is application/vnd.google-earth.kmz. For its reference system, KML uses 3D geographic coordinates: longitude, latitude and altitude, in that order, with negative values for west, south and below mean sea level if the altitude data is available. A minimal example KML document is sketched below.
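As a rough sketch of that structure (not from the original page; the placemark name and coordinates below are invented for illustration), the following Python snippet builds a minimal KML document containing a single Placemark, with coordinates written in the longitude, latitude, altitude order described above:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"


def make_placemark_kml(name, lon, lat, alt=0.0):
    """Build a minimal KML document containing one Placemark.

    KML coordinates are written as longitude,latitude,altitude, in that
    order: negative longitude means west, negative latitude means south.
    """
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(placemark, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
    coords = ET.SubElement(point, f"{{{KML_NS}}}coordinates")
    coords.text = f"{lon},{lat},{alt}"
    body = ET.tostring(kml, encoding="unicode")
    return '<?xml version="1.0" encoding="UTF-8"?>\n' + body


# Invented example placemark near Golden, Colorado.
print(make_placemark_kml("Example placemark", -105.2, 39.7))
```

The resulting document can be opened in Google Earth or any other viewer that implements the KML encoding.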

Search engine indexing. Popular engines focus on the full-text indexing of online, natural language documents.[1] Media types such as video and audio[2] and graphics[3] are also searchable. Meta search engines reuse the indices of other services and do not store a local index, whereas cache-based search engines permanently store the index along with the corpus. Unlike full-text indices, partial-text services restrict the depth indexed to reduce index size. Larger services typically perform indexing at a predetermined time interval due to the required time and processing costs, while agent-based search engines index in real time. Indexing: the purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query.
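As a minimal sketch of what such an index looks like (the documents and IDs below are invented for illustration, not taken from any of the pages above), a full-text inverted index maps each term to the set of documents that contain it, so a query can be answered without scanning every document:

```python
from collections import defaultdict

# Tiny invented corpus: document ID -> text.
documents = {
    1: "renewable energy research laboratory",
    2: "satellite imagery and geospatial analysis",
    3: "renewable fuels and energy analysis",
}

# Build the inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)


def search(query):
    """Return the IDs of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result


print(search("renewable energy"))  # {1, 3}
```

Real engines add the refinements discussed next (compression, merge strategies, incremental maintenance), but the lookup idea is the same.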

Index design factors: major factors in designing a search engine's architecture include merge factors, storage techniques (how to store the index data, that is, whether information should be compressed or filtered), index size, lookup speed, and maintenance. Geospatial intelligence. Geospatial intelligence, GEOINT (GEOspatial INTelligence), GeoIntel (Geospatial Intelligence), or GSI (GeoSpatial Intelligence) is intelligence about human activity on Earth derived from the exploitation and analysis of imagery and geospatial information that describes, assesses, and visually depicts physical features and geographically referenced activities on the Earth.

GEOINT consists of imagery, imagery intelligence (IMINT) and geospatial information.[1] GEOINT encompasses all aspects of imagery (including capabilities formerly referred to as Advanced Geospatial Intelligence and imagery-derived MASINT) and geospatial information and services (GI&S; formerly referred to as mapping, charting, and geodesy).

Geospatial Intelligence is a field of knowledge, a process, and a profession. Map Overlay and Statistical System. The Map Overlay and Statistical System (MOSS) is a GIS software technology. Development of MOSS began in late 1977 and it was first deployed for use in 1979. MOSS represents a very early public domain, open source GIS development, predating the better known GRASS by five years. MOSS used a polygon-based data structure in which point, line, and polygon features could all be stored in the same file; the user interacted with MOSS via a command-line interface. In the mid-1970s, coal-mining activities required Federal agencies to evaluate the impacts of strip mine development on wildlife and wildlife habitat.

In 1976, the US Fish and Wildlife Service (FWS) issued a Request for Proposals (RFP) for developing a Geographic Information System (GIS) for environmental impact and habitat mitigation studies. For the first six months of 1977, the project team worked on two tasks: a User Needs Assessment and an Inventory of Existing GIS Technology. Multiple Satellite Imaging. Multiple Satellite Imaging is the process of using multiple satellites to gather more information than a single satellite can, so that a better estimate of the desired source is possible.

Something that cannot be seen with one telescope might be visible with two or more telescopes. Interferometry is the process of combining waves in such a way that they constructively interfere; that is, when two or more independent sources detect a signal at the same given frequency, those signals can be combined and the result is better than each one individually. An overview of astronomical interferometers and a history of astronomical interferometry can be found on their respective pages.
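As a rough, simplified illustration of why combining independent detections helps (this sketch just averages simulated noisy measurements of the same signal; real interferometry correlates the waves rather than averaging samples, and every number here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" signal: a sine wave at a single frequency.
t = np.linspace(0.0, 1.0, 1000)
true_signal = np.sin(2 * np.pi * 5 * t)

# Each of four independent observers sees the same signal plus its own noise.
n_observers = 4
observations = [true_signal + rng.normal(0.0, 1.0, t.size)
                for _ in range(n_observers)]


def rms_error(estimate):
    """Root-mean-square error of an estimate against the true signal."""
    return float(np.sqrt(np.mean((estimate - true_signal) ** 2)))


print("one observer:", rms_error(observations[0]))
print("combined    :", rms_error(np.mean(observations, axis=0)))
# Averaging N independent observations cuts the noise by roughly sqrt(N),
# so the combined estimate tracks the source better than any single one.
```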

The NASA Origins Program was created in the 1990s to ultimately search for the origin of the universe. There is also the constant search for life on other worlds. Research and development. Cycle of research and development. Research and development (R&D, also called research and technical development or research and technological development, RTD, in Europe) is a specific group of activities within a business. The activities that are classified as R&D differ from company to company, but there are two primary models. In one model, the primary function of an R&D group is to develop new products; in the other, it is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services.

Under both models, R&D differs from the vast majority of a company's activities, which are intended to yield nearly immediate profit or immediate improvements in operations and involve little uncertainty as to the return on investment (ROI). Present-day R&D is a core part of the modern business world. Hacker (computer security). Bruce Sterling traces part of the roots of the computer underground to the Yippies, a 1960s counterculture movement which published the Technological Assistance Program (TAP) newsletter.

TAP was a phone phreaking newsletter that taught techniques for unauthorized exploration of the phone network. Many people from the phreaking community are also active in the hacking community even today, and vice versa. Several subgroups of the computer underground with different attitudes use different terms to demarcate themselves from each other, or try to exclude some specific group with which they do not agree. According to Ralph D. Clifford, to crack is to "gain unauthorized access to a computer in order to commit another crime such as destroying information contained in that system".[6] These subgroups may also be defined by the legal status of their activities.[7] A grey hat hacker is a combination of a black hat and a white hat hacker. Rootkit. Penetration test. A method of evaluating computer and network security by simulating a cyber attack. A penetration test, colloquially known as a pen test or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system;[1][2] this is not to be confused with a vulnerability assessment.[3] The test is performed to identify weaknesses (also referred to as vulnerabilities), including the potential for unauthorized parties to gain access to the system's features and data,[4][5] as well as strengths,[6] enabling a full risk assessment to be completed.

Security issues that the penetration test uncovers should be reported to the system owner.[9] Penetration test reports may also assess potential impacts to the organization and suggest countermeasures to reduce the risk.[9] Penetration tests are a component of a full security audit. Several standard frameworks and methodologies exist for conducting penetration tests. WiFi Pineapple | Home. Penetration Testing Software | Metasploit. Kali Linux. Kali Linux is a Debian-derived Linux distribution designed for digital forensics and penetration testing.

It is maintained and funded by Offensive Security Ltd. It was developed by Mati Aharoni and Devon Kearns of Offensive Security through the rewrite of BackTrack, their previous forensics Linux distribution.[1] Kali Linux is distributed in 32- and 64-bit images for use on hosts based on the x86 instruction set, and as an image for the ARM architecture for use on the Raspberry Pi computer and on Samsung's ARM Chromebook.[3] .:: Phrack Magazine ::. 2600: The Hacker Quarterly. Data mining. Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.[1] Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1] The manual extraction of patterns from data has occurred for centuries.
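As a toy illustration of "discovering patterns in large data sets" (a sketch only; the transactions below are invented, and this shows just one simple technique, frequent pair counting, rather than a full KDD pipeline):

```python
from collections import Counter
from itertools import combinations

# Invented transaction data, purely for illustration.
transactions = [
    {"solar", "battery", "inverter"},
    {"solar", "inverter"},
    {"wind", "battery"},
    {"solar", "battery", "wind"},
]

# Count how often each pair of items occurs together across transactions.
pair_counts = Counter()
for items in transactions:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

# Report pairs that co-occur in at least half of the transactions.
min_support = len(transactions) / 2
for pair, count in pair_counts.most_common():
    if count >= min_support:
        print(pair, count)
```

A production data mining workflow would wrap a step like this with the pre-processing, interestingness metrics, and post-processing mentioned above.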

Data center. An operation engineer overseeing a network operations control room of a data center. A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices. Large data centers are industrial-scale operations using as much electricity as a small town[1] and sometimes are a significant source of air pollution in the form of diesel exhaust.[2] Data centers have their roots in the huge computer rooms of the early ages of the computing industry.

Early computer systems were complex to operate and maintain, and required a special environment in which to operate. The boom of data centers came during the dot-com bubble. Racks of telecommunications equipment in part of a data center. Big data. Visualization of daily Wikipedia edits created by IBM.

At multiple terabytes in size, the text and images of Wikipedia are an example of big data. Growth of and digitization of global information storage capacity. Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set. Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".

Work with big data is necessarily uncommon; most analysis is of "PC size" data, on a desktop PC or notebook[11] that can handle the available data set. Big data can be described by a number of characteristics. Democracy. Labor unions in the United States. United Brotherhood of Carpenters and Joiners of America. United Association. International Brotherhood of Electrical Workers. Apprenticeship. Lineman (occupation). Welder. Free climbing. Commercial driver's license. National Electrical Contractors Association - Association of Electrical Contractors, Washington DC. Electric utility. Smart grid. Battery room. Alternative energy. Healy Clean Coal Project. Distributed generation. Geothermal electricity. OutBack Power / Home.

Manley Hot Springs, Alaska. Vertical axis wind turbine. Susitna Hydroelectric Project. Wind power in Alaska. Eva Creek Wind - GVEA - Golden Valley Electric Association. Amateur radio. Programmer. Robotics. Raspberry Pi | An ARM GNU/Linux box for $25. Take a byte! Arduino - HomePage. Agricultural robot. Mesh networking. Cloud Controller - Solutions. Www.dd-wrt.com | Unleash Your Router. List of wireless community networks by region. Project Loon. AutoCAD. Electron beam direct manufacturing. 3D printing.

Materials science. DEFCAD. Thingiverse - Digital Designs for Physical Objects. 3D scanner.