Lehman's laws of software evolution In software engineering, the laws of software evolution refer to a series of laws that Lehman and Belady formulated, starting in 1974, with respect to software evolution.[1][2] The laws describe a balance between forces driving new development on the one hand, and forces that slow down progress on the other. Context Observing that most software is subject to change in the course of its existence, the authors set out to determine laws that these changes will typically obey, or must obey in order for the software to survive.[citation needed] In his 1980 article,[1] Lehman qualified the application of such laws by distinguishing between three categories of software: The laws are said to apply only to the last category of systems. The laws All told, eight laws were formulated: References ^ a b Lehman, Meir M. (1980).
Responsibility-driven design Responsibility-driven design is a design technique in object-oriented programming. It was proposed by Rebecca Wirfs-Brock and Brian Wilkerson, who defined it as follows: Responsibility-driven design is inspired by the client/server model. It focuses on the contract by asking: What actions is this object responsible for? What information does this object share? Responsibility-driven design is in direct contrast with data-driven design, which promotes defining the behavior of a class alongside the data that it holds. The client/server model they refer to assumes that a software client and a software server exchange information based on a contract that both parties commit to adhere to. Building blocks In their book Object Design: Roles, Responsibilities and Collaborations,[1] the authors describe the following building blocks that make up responsibility-driven design. Objects Objects are described as things that have machinelike behaviors that can be plugged together to work in concert.
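The contrast with data-driven design can be sketched in code. The following is a hypothetical illustration, not from the book: the `Account` class name and its methods are invented for this example. The client asks what the object is responsible for and what information it shares, rather than reaching into the data it holds.

```python
class Account:
    """Responsibility-driven sketch: the account is responsible for
    enforcing its own withdrawal rules; its balance stays private."""

    def __init__(self, balance: float) -> None:
        self._balance = balance  # internal detail, not part of the contract

    def withdraw(self, amount: float) -> bool:
        # Action the object is responsible for: the object decides,
        # the client only asks.
        if 0 < amount <= self._balance:
            self._balance -= amount
            return True
        return False

    def can_cover(self, amount: float) -> bool:
        # Information the object shares: a yes/no answer,
        # not the raw balance for outside manipulation.
        return amount <= self._balance


acct = Account(100.0)
print(acct.withdraw(30.0))   # True: the account fulfils its responsibility
print(acct.can_cover(80.0))  # False: only 70.0 remains, but that figure stays hidden
```

A data-driven version would instead expose the balance field and let every client re-implement the withdrawal rule, scattering the behavior away from the data it governs.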
GDS design principles GOV.UK is for anyone who has an interest in how UK government policies affect them. Using this style guidance will help us make all GOV.UK information readable and understandable. It has a welcoming and reassuring tone and aims to be a trusted and familiar resource. We take all of the 'writing for the web' points into account when we write for GOV.UK. Then we add the following points based on user testing and analysis of our own website. Active voice Use the active rather than the passive voice. Addressing the user Address the user as 'you' where possible. Avoid duplication What are you and other departments publishing? We have over 116,000 items of content in departmental and policy areas. Duplicate content confuses the user and damages the credibility of GOV.UK content. If there are 2 pieces of information on a subject, perhaps there are 3 and the user has missed one? If something is written once and links to relevant info easily and well, people are more likely to trust the content. Be concise
Abstraction principle (computer programming) When read as a recommendation to the programmer, the abstraction principle can be generalized as the "don't repeat yourself" principle, which recommends avoiding the duplication of information in general, and also avoiding the duplication of human effort involved in the software development process. As a recommendation to the programmer, in its formulation by Benjamin C. Pierce in Types and Programming Languages (2002), the abstraction principle reads (emphasis in original):[1] As a requirement of the programming language, in its formulation by David A. Under this very name, the abstraction principle appears in a long list of books. Alfred John Cole, Ronald Morrison (1982) An introduction to programming with S-algol: "[Abstraction] when applied to language design is to define all the semantically meaningful syntactic categories in the language and allow an abstraction over them".[3] ^ Pierce, Benjamin (2002).
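A minimal sketch of the principle in practice (the function and data names here are invented for illustration): two near-duplicate loops are replaced by one parameterised function, so the shared mechanism is expressed exactly once and every caller reuses the abstraction.

```python
def total(items, weight):
    """Single point of truth for the summation mechanism.

    Without this abstraction, each caller would copy the same loop
    and change only the expression inside it."""
    return sum(weight(item) for item in items)


order_lines = [{"price": 3.0, "qty": 2}, {"price": 1.5, "qty": 4}]

# Both callers vary only the part that differs; the loop lives once.
revenue = total(order_lines, lambda line: line["price"] * line["qty"])
units = total(order_lines, lambda line: line["qty"])
print(revenue, units)  # 12.0 6
```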
PragPub February 2013 | Estimation is Evil What you don't know can hurt you, especially when you convince yourself that you do know it. Many Agile teams—I believe it's most Agile teams—get some improvement from applying Agile values, principles, and practices. These teams do a better job of predicting when they'll be done, and a better job of being done at the time they predict. They break requirements down into a backlog, they estimate how long items will take, and they burn through that backlog pretty well. Usually by the predicted end of the project, they're closer to done than they used to be before they went Agile. Agile teams break their work down into a couple of weeks' worth at a time, and often estimate that work. These teams get better transparency, inside the team and with their management and stakeholders. Teams applying Agile ideas almost always get some improvement. Some teams do much better, and we'll talk about what they do later on. These teams are better than they were, but are still undistinguished. It seems natural.
Design by contract A design by contract scheme. The DbC approach assumes all client components that invoke an operation on a server component will meet the preconditions specified as required for that operation. Where this assumption is considered too risky (as in multi-channel client-server or distributed computing), the opposite "defensive design" approach is taken, meaning that a server component tests (before or while processing a client's request) that all relevant preconditions hold true, and replies with a suitable error message if not. History Design by contract has its roots in work on formal verification, formal specification and Hoare logic. Description The central idea of DbC is a metaphor on how elements of a software system collaborate with each other on the basis of mutual obligations and benefits. The contract is semantically equivalent to a Hoare triple which formalises the obligations. What does the contract expect? What does it guarantee? What does it maintain? Performance implications Relationship to software testing
Reactor - a foundation for asynchronous applications on the JVM We're pleased to announce that, after a long period of internal incubation, we're releasing a foundational framework for asynchronous applications on the JVM which we're calling Reactor. It provides abstractions for Java, Groovy and other JVM languages to make building event- and data-driven applications easier. It's also really fast. On modest hardware, it's possible to process over 15,000,000 events per second with the fastest non-blocking Dispatcher. Other dispatchers are available to provide the developer with a range of choices, from thread-pool style, long-running task execution to non-blocking, high-volume task dispatching. Reactor, as the name suggests, is heavily influenced by the well-known Reactor design pattern. What is Reactor good for? That's why the Spring XD project (as well as several other Spring ecosystem projects like Spring Integration and Spring Batch) intend to take advantage of Reactor. Selectors, Consumers and Events. To Groovy, with Love. Dispatching
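The Reactor design pattern the post names can be sketched in a few lines. Note this is a generic, hypothetical illustration of the pattern's core idea — selectors routing events to registered consumers from a single dispatch point — and not the Reactor framework's actual API (the class and method names here are invented).

```python
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Minimal reactor-pattern sketch: consumers register against a
    selector key; notify() dispatches matching events to all of them."""

    def __init__(self) -> None:
        self._consumers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, selector: str, consumer: Callable[[Any], None]) -> None:
        # Register a consumer for events whose key matches the selector.
        self._consumers[selector].append(consumer)

    def notify(self, selector: str, event: Any) -> None:
        # Dispatch the event to every consumer registered for this key.
        for consumer in self._consumers[selector]:
            consumer(event)


bus = EventBus()
seen: list[str] = []
bus.on("greeting", seen.append)
bus.notify("greeting", "hello")
bus.notify("other", "ignored")   # no consumer registered for this key
print(seen)  # ['hello']
```

The framework's real dispatchers layer threading strategies (thread pools, non-blocking ring buffers) under this same register-and-notify shape.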
Product family engineering Product family engineering (PFE), also known as product line engineering, is a synonym for "domain engineering", a term coined by James Neighbors in his 1980 dissertation at the University of California, Irvine; the name "product line" comes from the Software Engineering Institute. Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product family engineering. Product family engineering can be defined as a method that creates an underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variabilities. Product family engineering is a relatively new approach to the creation of new products. Several studies have shown that using a product family engineering approach for product development can have several benefits (Carnegie Mellon (SEI), 2003). Overall process Phase 1: product management
Software development methodology A software development methodology or system development methodology in software engineering is a framework that is used to structure, plan, and control the process of developing an information system. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming. A methodology can also include aspects of the development environment (e.g. IDEs), model-based development, computer-aided software development, and the utilization of particular frameworks (e.g. programming libraries or other tools). History The software development methodology (also known as SDM) framework did not emerge until the 1960s. As a framework Three basic approaches are applied to software development methodology frameworks. A wide variety of such frameworks have evolved over the years, each with its own recognized strengths and weaknesses. As an approach 1970s 1980s 1990s Approaches
Waterfall model The unmodified "waterfall model": progress flows from the top to the bottom, like a cascading waterfall. The waterfall model is a sequential design process, used in software development, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation and Maintenance. The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.[1] History The first known presentation describing use of similar phases in software engineering was held by Herbert D. The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Model
Rationalism In epistemology, rationalism is the view that "regards reason as the chief source and test of knowledge"[1] or "any view appealing to reason as a source of knowledge or justification".[2] More formally, rationalism is defined as a methodology or a theory "in which the criterion of the truth is not sensory but intellectual and deductive".[3] Rationalists believe reality has an intrinsically logical structure. Because of this, rationalists argue that certain truths exist and that the intellect can directly grasp these truths. That is to say, rationalists assert that certain rational principles exist in logic, mathematics, ethics, and metaphysics that are so fundamentally true that denying them causes one to fall into contradiction. Philosophical usage Rationalism is often contrasted with empiricism. Theory of justification The theory of justification is the part of epistemology that attempts to understand the justification of propositions and beliefs. The other two theses
Design More formally, design has been defined as follows. Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[3] Here, a "specification" can be manifested as either a plan or a finished product, and "primitives" are the elements from which the design object is composed. With such a broad denotation, there is no universal language or unifying institution for designers of all disciplines. The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer). Design as a process The Rational Model Example sequence of stages