Moore's law

Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. His prediction has proven accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors, and even the number and size of pixels in digital cameras are all improving at roughly exponential rates as well. This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy. Moore's law thus describes a driving force of technological and social change in the late 20th and early 21st centuries.
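A doubling every two years is easy to turn into arithmetic. The sketch below is illustrative only (the function name and the starting figure are my own choices; roughly 2,300 transistors on the 1971 Intel 4004 is the commonly cited number):

```python
def projected_transistors(start_count: float, start_year: int, year: int) -> float:
    """Project a transistor count assuming a doubling every two years."""
    return start_count * 2 ** ((year - start_year) / 2)

# The Intel 4004 (1971) had roughly 2,300 transistors; ten doublings
# over the following 20 years predict about 2.4 million.
print(projected_transistors(2300, 1971, 1991))  # -> 2355200.0
```

Twenty years is ten doublings, i.e. a factor of 2^10 = 1024, which is what makes the cumulative effect so dramatic.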
Small-world network

[Figure: a small-world network example, with hubs drawn larger than other nodes. Average vertex degree 1.917, average shortest path length 1.803, clustering coefficient 0.522; for a comparable random graph, average vertex degree 1.417 and average shortest path length 2.109.]

In the context of a social network, this results in the small-world phenomenon of strangers being linked by a mutual acquaintance.

Properties of small-world networks This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network).

Examples of small-world networks Small-world properties are found in many real-world phenomena, including road maps, food chains, electric power grids, metabolite-processing networks, networks of brain neurons, voter networks, telephone call graphs, and social influence networks.
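The degree distribution mentioned above is straightforward to compute from an adjacency structure. A minimal sketch, using a made-up toy graph (the function name and data are mine, not from the article):

```python
from collections import Counter

def degree_distribution(adjacency: dict) -> dict:
    """Fraction of nodes having each degree (number of connections)."""
    degrees = [len(neighbours) for neighbours in adjacency.values()]
    counts = Counter(degrees)
    n = len(adjacency)
    return {k: c / n for k, c in sorted(counts.items())}

# A toy undirected graph as an adjacency dict (hypothetical example data).
toy_graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(degree_distribution(toy_graph))  # -> {1: 0.25, 2: 0.5, 3: 0.25}
```

For a small-world or scale-free network, the same function applied to real data would reveal the characteristic heavy tail of hub-dominated degree distributions.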
IT Conversations: Noshir Contractor

Recent technological advances in hardware and software, broadband connectivity, and the decreasing cost of computers, cell phones, and other such devices have created an environment where we can connect with anyone, anytime, anywhere almost effortlessly. But how do we determine with whom we want to connect? The answers to this question can be found by studying the underlying socio-technological motivations for the creation, maintenance, destruction, and reconstitution of knowledge and social networks. Dr. Contractor is the author of over 250 research papers in communication. IT Conversations' publication of this program is underwritten by your donations.

Resources: This presentation is one of a series from the MeshForum 2005 event held in Chicago, IL, May 1-4, 2005.

For Team ITC: Description editor: Chase Southard. Post-production audio engineer: Stuart Hunter. Series editor: Cori Schlegel.
Disruptive technology

Sustaining innovations are typically innovations in technology, whereas disruptive innovations cause changes to markets. For example, the automobile was a revolutionary technological innovation, but it was not a disruptive innovation, because early automobiles were expensive luxury items that did not disrupt the market for horse-drawn vehicles. The market for transportation essentially remained intact until the debut of the lower-priced Ford Model T in 1908. The current theoretical understanding of disruptive innovation is different from what might be expected by default, an idea that Clayton M. Christensen and his collaborators have developed. The work of Christensen and others during the 2000s has addressed the question of what firms can do to avoid displacement brought on by technological disruption.

History and usage of the term The term disruptive technologies was coined by Clayton M. Christensen.

The theory Christensen defines a disruptive innovation as a product or service designed for a new set of customers.
Scale-free network

A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ~ k^(-γ), where γ is a parameter whose value is typically in the range 2 < γ < 3, although occasionally it may lie outside these bounds.

History In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers—i.e., the number of citations they receive—had a heavy-tailed distribution following a Pareto distribution or power law, and thus that the citation network is scale-free. Barabási and Albert proposed a generative mechanism to explain the appearance of power-law distributions, which they called "preferential attachment" and which is essentially the same as that proposed by Price. The degree of a node v (that is, the number of edges incident to v) is denoted by k_v.

Characteristics [Figure: a random network (a) and a scale-free network (b).]
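Preferential attachment is simple to simulate: each new node links to existing nodes with probability proportional to their current degree, so well-connected nodes accumulate links and become hubs. The sketch below is a minimal illustration of that mechanism, not a faithful Barabási-Albert implementation; the function name and parameters are my own:

```python
import random

def preferential_attachment(n: int, m: int, seed: int = 0) -> dict:
    """Grow an undirected graph in which each new node attaches m edges
    to existing nodes with probability proportional to their degree.
    A minimal sketch of preferential attachment."""
    rng = random.Random(seed)
    # Seed with a small complete graph so every early node has degree > 0.
    adjacency = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    # Each vertex appears in this list once per incident edge, so drawing
    # uniformly from it is a degree-proportional draw.
    endpoints = [v for v, nbrs in adjacency.items() for _ in nbrs]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        adjacency[new] = set(targets)
        for t in targets:
            adjacency[t].add(new)
            endpoints += [new, t]
    return adjacency

g = preferential_attachment(500, 2)
degrees = sorted(len(nbrs) for nbrs in g.values())
print("max degree:", degrees[-1], "median degree:", degrees[len(degrees) // 2])
```

Running this typically shows a maximum degree far above the median, the hallmark of the heavy-tailed, hub-dominated distribution the article describes.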
G/localization: When Global Information and Local Interaction Collide

danah boyd, O'Reilly Emerging Technology Conference, March 6, 2006. [This is a rough crib of the actual talk.] Citation: boyd, danah. 2006. "G/localization: When Global Information and Local Interaction Collide." O'Reilly Emerging Technology Conference, San Diego, CA.

Good afternoon. My talk today is about "glocalization," one of the most grotesque words that academics have managed to coin. Glocalization is the ugliness that ensues when the global and local are shoved uncomfortably into the same concept. I want to talk about what it means to connect the global and local together in technology and how this affects the design process. The digital era has allowed us to cross space and time, engage with people in a far-off time zone as though they were just next door, do business with people around the world, and develop information systems that potentially network us all closer and closer every day.
Geoffrey Moore - Dealing With Darwin

Random graph

Random graph models A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of study in this field is to determine at what stage a particular property of the graph is likely to arise. Different random graph models produce different probability distributions on graphs. Most commonly studied is the model proposed by Edgar Gilbert, denoted G(n,p), in which every possible edge occurs independently with probability 0 < p < 1. A closely related model, the Erdős–Rényi model denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, where N = n(n−1)/2 is the number of possible edges, G(n,M) has (N choose M) elements and every element occurs with probability 1/(N choose M). The latter model can be viewed as a snapshot at a particular time (M) of the random graph process, a stochastic process that starts with n vertices and no edges and at each step adds one new edge chosen uniformly from the set of missing edges.
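Gilbert's G(n,p) model translates directly into code: iterate over all n(n−1)/2 candidate edges and keep each one independently with probability p. A minimal sketch (the function name is mine):

```python
import random
from itertools import combinations

def gnp_random_graph(n: int, p: float, seed: int = 0) -> set:
    """Sample Gilbert's G(n, p) model: each of the n(n-1)/2 possible
    edges is present independently with probability p."""
    rng = random.Random(seed)
    return {(u, v) for u, v in combinations(range(n), 2) if rng.random() < p}

edges = gnp_random_graph(100, 0.05, seed=42)
# With n = 100 there are N = 4950 possible edges, so the expected
# edge count is N * p = 247.5.
print(len(edges))
```

Sampling G(n,M) instead would amount to choosing exactly M edges uniformly from the same candidate set, matching the snapshot-of-a-process view described above.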
The Social Networking Faceoff

Written by Alex Iskold and edited by Richard MacManus. With all this buzz around the potential Yahoo! acquisition of Facebook for $1 billion, we think it's time to do the social networking faceoff. Arguably, of all the services in the new social era, social network sites hold the most promise. Another natural trend that we are seeing in the space is demographic segmentation. In addition to those three, Bebo is making some major waves and has surpassed MySpace in the UK and New Zealand. The Social Network Faceoff Chart (*) The active user estimate is based on the assumption that MySpace has 70M users. Traffic Dynamics We can gain additional insights by looking at the traffic dynamics over the past year. Orkut is rising? From the charts we clearly see that Orkut is gaining, but why? But perhaps it is not just that Brazilian users dominate, but also the attitude and confidence of Orkut. Conclusion MySpace is an undisputed leader on all counts, but Orkut is on the rise and it's moving very rapidly.
National University

National University Online Library Membership The National University Online Library is a unique and valuable resource for alumni, and includes one of the largest collections of electronic books in the nation. As a student, you had access to a great collection of print and online resources through the National University Library System, including NetLibrary, EBSCO and onsite library resources. Join the Alumni Online Library and you can continue to have access to the following key resources and services: borrowing books (free delivery outside of the San Diego area); using online journals with the alumni version of Academic Search Premier (EBSCO); and having access to the NetLibrary e-book collection.
Average path length

Concept The average path length distinguishes an easily negotiable network from one that is complicated and inefficient, with a shorter average path length being more desirable. However, the average path length is simply what the path length will most likely be.

Definition Consider an unweighted graph G with the set of vertices V. Let d(v1, v2) denote the shortest distance between v1 and v2, and set d(v1, v2) = 0 if v2 cannot be reached from v1. Then the average path length l_G is:

l_G = (1 / (n · (n − 1))) · Σ_{i ≠ j} d(v_i, v_j),

where n is the number of vertices in G.

Applications In a real network like the World Wide Web, a short average path length facilitates the quick transfer of information and reduces costs. Most real networks have a very short average path length, leading to the concept of a small world where everyone is connected to everyone else through a very short path. As a result, most models of real networks are created with this condition in mind. The average path length depends on the system size but does not change drastically with it.
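For an unweighted graph, the definition above can be computed directly: run a breadth-first search from every vertex, sum the pairwise distances, and divide by the number of ordered pairs n(n−1). A minimal sketch assuming a connected graph (function name and toy data are mine):

```python
from collections import deque

def average_path_length(adjacency: dict) -> float:
    """Average of shortest-path distances d(u, v) over all ordered pairs
    of distinct vertices, via BFS from every vertex.
    Assumes the graph is connected."""
    n = len(adjacency)
    total = 0
    for source in adjacency:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Path graph a-b-c: ordered-pair distances sum to 8 over 6 pairs.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(average_path_length(path))  # -> 1.3333333333333333
```

For disconnected graphs the definition in the article sets unreachable distances to zero, which this sketch would need to handle explicitly (here unreachable vertices simply never enter `dist`, which has the same effect on the sum).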
Zaadz: Connect. Grow. Inspire. Empower.

Clustering coefficient

In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Watts and Strogatz, 1998). Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the embeddedness of single nodes.

Global clustering coefficient The global clustering coefficient is based on triplets of nodes. Watts and Strogatz defined the clustering coefficient as follows: "Suppose that a vertex v has k_v neighbours; then at most k_v(k_v − 1)/2 edges can exist between them (this occurs when every neighbour of v is connected to every other neighbour of v). Let C_v denote the fraction of these allowable edges that actually exist. Define C as the average of C_v over all v."
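The Watts-Strogatz local coefficient translates directly into code: count the edges that actually exist among a vertex's neighbours and divide by the k_v(k_v − 1)/2 possible ones. A minimal sketch on a made-up toy graph (function name and data are mine):

```python
def local_clustering(adjacency: dict, v) -> float:
    """Fraction of possible edges among v's neighbours that actually
    exist (the Watts-Strogatz local clustering coefficient C_v)."""
    neighbours = list(adjacency[v])
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if neighbours[j] in adjacency[neighbours[i]]
    )
    return links / (k * (k - 1) / 2)

# Toy graph: b and c are connected neighbours of a; d links only to a.
friends = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(local_clustering(friends, "a"))  # 1 of 3 possible neighbour edges
avg_c = sum(local_clustering(friends, v) for v in friends) / len(friends)
print(round(avg_c, 3))  # -> 0.583, the network-average coefficient C
```

Averaging C_v over all vertices gives the network coefficient C from the quoted definition; the triplet-based global coefficient mentioned in the article is a related but distinct quantity.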