What is continuous integration (CI)? - Definition from WhatIs.com. Continuous integration (CI) is a software engineering practice in which isolated changes are tested and reported on immediately after they are added to a larger code base. The goal of CI is to provide rapid feedback, so that any defect introduced into the code base can be identified and corrected as soon as possible. Continuous integration tools can automate the testing and build a document trail. Continuous integration has evolved since its conception. According to Paul Duvall, co-author of Continuous Integration: Improving Software Quality and Reducing Risk, best practices of CI include committing code frequently. CI originated within the extreme programming paradigm, but its principles can be applied to any iterative programming model, such as agile programming. This was last updated in July 2008.
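The rapid-feedback loop that CI tools automate can be sketched in a few lines: run each build/test step for a change in order and report the first failure immediately. This is a toy illustration, not any particular CI product; the stage names and commands are hypothetical.

```python
import subprocess

def run_step(cmd):
    """Run one pipeline step; True means it succeeded (exit code 0)."""
    return subprocess.run(cmd).returncode == 0

def ci_pipeline(steps):
    """Run build/test steps in order, stopping at the first failure so the
    committer gets feedback as soon as a defect is detected."""
    for name, cmd in steps:
        if not run_step(cmd):
            return f"FAIL at {name}"  # rapid feedback: name the broken step
    return "PASS"
```

A caller would pass something like `[("build", [...]), ("unit tests", [...])]`; committing frequently keeps each run small, so a `FAIL` points at only a handful of changes.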
Meta Build Systems. This is a story about my search for a hassle-free, cross-platform, open source (meta-/meta-meta-) build system. For our open source Bullet physics engine, I've been distributing the source code in a way that should make it as easy as possible to build out of the box. This means that on every supported platform, the user (a developer who downloaded the Bullet SDK) should be able to download and unzip the zipfile or tarball on their machine and get started as soon as possible. It should be hassle-free for the user, but also for me, so I'd rather not manually update too many files for each release. Aside from all the different platforms, we also need to support various compilers, compiler settings, and integrated development environments (IDEs). Here are a couple of build systems and modifications that I tried or considered trying: Visual Studio project files. autotools: for most Unix flavors, autotools does a great job.
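The core idea of a meta-build system, generating native build files for each toolchain from one platform-neutral project description, can be sketched as follows. The description format, file names, and generators here are hypothetical, not the mechanism of any real tool.

```python
# Toy sketch of a meta-build generator: one project description in,
# per-toolchain build files out. All names are illustrative.

PROJECT = {
    "name": "BulletExample",
    "sources": ["btRigidBody.cpp", "btCollisionWorld.cpp"],
}

def gen_makefile(project):
    # Emit a minimal Makefile rule linking one object per source file.
    objs = " ".join(s.replace(".cpp", ".o") for s in project["sources"])
    return f"{project['name']}: {objs}\n\tg++ -o $@ $^\n"

def gen_msvc_stub(project):
    # Real generators emit full .vcxproj XML; this only sketches the idea.
    files = "".join(f'  <ClCompile Include="{s}" />\n' for s in project["sources"])
    return f"<Project>\n{files}</Project>\n"

GENERATORS = {"make": gen_makefile, "msvc": gen_msvc_stub}

def generate(project, toolchain):
    """Produce the native build file text for the requested toolchain."""
    return GENERATORS[toolchain](project)
```

The maintenance win is that each release only updates the single description, and every platform's project files are regenerated rather than edited by hand.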
What is Jidoka? Jidoka is a Japanese term for automation that is widely used in the Toyota Production System (TPS), Lean Manufacturing, and Total Productive Maintenance (TPM). The concept is to empower the machine owner (operator): if a problem occurs on the flow line, the operator can stop the line, so that defective pieces do not move on to the next station. This minimizes defects, overproduction, and waste. Its focus is also on understanding the causes of problems and then taking preventive action to reduce them. The history of Jidoka goes back to the early 1900s, when the first loom was stopped automatically because of a broken thread. The concept of an automated line is used to relieve workers and minimize human-related errors. The purpose of implementing Jidoka is to diagnose a defect immediately and correct it accordingly. Jidoka is used effectively in TPM and Lean Manufacturing, providing substantial benefits to organizations.
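The stop-the-line idea translates directly into software pipelines. Below is a toy sketch (all names hypothetical) of a flow line that halts the moment a defective piece is detected, so defects never reach the next station:

```python
class LineStopped(Exception):
    """Raised when a defect is detected, halting the whole flow line (jidoka)."""

def run_line(pieces, is_defective):
    """Pass pieces to the next station one by one; stop the entire line as
    soon as any piece is found defective, instead of letting it flow on."""
    passed = []
    for piece in pieces:
        if is_defective(piece):
            # Stopping immediately keeps the defect visible for root-cause
            # analysis and prevents overproduction of bad pieces.
            raise LineStopped(f"defect in {piece!r}; line stopped")
        passed.append(piece)  # only good pieces move to the next station
    return passed
```
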
George Dinwiddie's blog » How easy is it for your programmers to fix problems? A programmer, writing some new code, looks into some existing code that she needs to use. Something doesn’t look quite right. In fact, there’s a bug. Whether no one’s triggered it, or they have but their complaints haven’t reached anyone who will do something about it, is hard to say. In such a situation, I would prefer to write a new test illustrating the bug, fix it, and check both the test and the fix into source control. Maybe, however, there are policies, either explicit or tacit, that prevent such quick resolution. Perhaps there is a “ticketing” system that requires opening a formal change ticket before introducing a change. Perhaps someone else is the “owner” of this code, and you need to ask their permission before fixing it. I’ve often seen organizations that lock down their “architectural framework” code on the assumption that only their most senior developers can be trusted to work on it. Perhaps various teams have to analyze what effect the change will have on their code.
Pitfalls In Automation. The following blog post, unless otherwise noted, was written by a member of Gamasutra's community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. Invariably, when we start talking about quality assurance and testing, it's not long before the talk turns to automation. Common Assumptions – Our First Mistake: On my early projects I had my hand in numerous automation strategies, both at BioWare and at other studios. It wasn't until Dragon Age: Origins that I realized that I, like so many engineers I think, had been operating under a number of mistaken assumptions. The Cost of Testing: For starters, testing is a function of Quality Control, not Quality Assurance. Perhaps automated testing might win out on cost, but certainly not on value. Additionally, automation is software too, and that means it is as prone to defects, and in as much need of testing, as any other software. Defect Prevention, Not Detection. New Assumptions and the Road to Success.
DevOps Kata - Single Line of Code - devopsy. Code kata is an attempt to bring this element of practice to software development. A kata is an exercise in karate where you repeat a form many, many times, making little improvements in each. The intent behind code kata is similar. Since DevOps is a broad topic, it can be difficult to determine if a team has enough skills and is doing enough knowledge sharing to keep the bus factor low. Single Line of Code. Goal: deploy a change that involves a single line of code to production. Exercise: if you have a non-trivial application, or set of related systems, then the time may vary depending on which line of code you touched. Change the title of your homepage; change a line of code that is only executed once (i.e., application initialization code); change a single line of code within a potential performance bottleneck; change a line of code in your infrastructure automation (e.g., Puppet, Chef, or Ansible). Honing the skill. Watch for waste: similarly, the deployment itself can attract types of waste.
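One way to practice the "watch for waste" step is to time each stage of the one-line deploy and see where the lead time actually goes. A minimal sketch, with hypothetical stage names:

```python
import time

def timed_stages(stages):
    """Run each deploy stage and record how long it took, so the team can
    see which stage dominates the lead time for a single-line change."""
    report = {}
    for name, action in stages:
        start = time.perf_counter()
        action()  # each action is a callable performing one stage
        report[name] = time.perf_counter() - start
    return report
```

Repeating the kata with stages like `[("commit", ...), ("build", ...), ("deploy", ...)]` makes the slow, wasteful stages visible, which is exactly what each repetition should then improve.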
Unit testing part 1 – Unit tests by the book. If you are a developer, I assume you have heard about unit tests. Most of you have probably even written one in your life. But how many of you have ever considered what makes a unit test a good unit test? Unit test frameworks are just (fairly) simple runners that invoke a list of methods, one after another. But the code that is actually going to be executed in each method is none of the framework's concern. How nice would it be to have a list of your favorite pizzas, with an option of home delivery just by double-clicking one? A unit test framework is just a tool for writing tests, but not every test written in such a framework will be a (proper) unit test. So what's wrong with that test? Let's start with the scope of the test. How do I verify that, then? Let's assume the pizzeria has an online system for taking orders, and our application has to send an HTTP request to order pizzas. To decouple your modules, use interfaces! What if the payment is done by a 3rd-party system? Great… anything else? Summary
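The decoupling advice can be sketched like this: a hypothetical pizzeria client hidden behind an abstract sender, so the unit test never sends a real HTTP request. All class and method names are illustrative, not from the article.

```python
from unittest import mock

class OrderService:
    """Depends on an abstract sender, not on the HTTP transport directly."""
    def __init__(self, sender):
        self.sender = sender  # anything with a send(order) method

    def order_pizzas(self, pizzas):
        if not pizzas:
            raise ValueError("empty order")
        return self.sender.send({"items": pizzas})

# A proper unit test replaces the HTTP sender with a mock: the scope is
# just OrderService's logic, and the test is fast and deterministic.
def test_order_sends_request():
    sender = mock.Mock()
    sender.send.return_value = {"status": "accepted"}
    service = OrderService(sender)
    assert service.order_pizzas(["margherita"]) == {"status": "accepted"}
    sender.send.assert_called_once_with({"items": ["margherita"]})
```

The same seam works for the 3rd-party payment system: the test verifies what our code asks the collaborator to do, without exercising the collaborator itself.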
spinroot [Current Tool Version: 2.13 -- 26 October 2007] See also static.html for an overview of currently available static analyzers. Uno is a simple tool for source code analysis, written in 2001. It has two main goals: it is designed to intercept primarily the three most common types of software defects: use of uninitialized variables, nil-pointer references, and out-of-bounds array indexing. Static analysis tools commonly suffer from far too low a signal-to-noise ratio: they tend to produce voluminous output that consists predominantly of false alarms, instances where the analyzer cannot determine the safety of the code and throws the problem back to the programmer. Uno tries to be more specific, concentrating exclusively on the three most commonly occurring, and most pernicious, defects in ANSI-C code. Example 1:

    $ num expr.c
     1  int *ptr;
     2
     3  void
     4  main(void)
     5  {
     6      if (ptr)
     7          *ptr = 0;
     8      if (!ptr)
     9          *ptr = 1;   /* possible nil-pointer dereference */
    10  }

To illustrate the process, two examples of user-defined Uno properties are shown below. Example 2:
Software Builds at EA: The 5000' View. A couple of days ago, this tweet by John Carmack popped up. In case the link ever goes away, he says, "Dealing with all the implicit state in the filesys and environment is probably going to be key to eventually making build systems not suck." At EA we have spent a lot of time developing build systems that don't suck. They're not perfect, but as we develop applications and games targeting platforms from Android to Xbox, plus everything in between, they work incredibly well for us. Framework: the cornerstone of our build infrastructure is a version of NAnt that was forked eons ago and has undergone essentially a complete rewrite. NAnt is just one piece of a system called Framework. Module: a module is Framework's representation of an artifact-producing build process. Package: packages are a collection of modules. Dependencies: dependency handling is one of Framework's killer features. Masterconfig: versioned packages lose their utility if there is no way to enforce the versions being used.
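The masterconfig's job, enforcing a single pinned version of each package across the whole dependency graph, can be sketched as below. The data format, package names, and function are hypothetical illustrations, not EA's actual Framework.

```python
# Hypothetical masterconfig: one pinned version per package for the whole build.
MASTERCONFIG = {
    "corelib": "3.08.00",
    "mathlib": "1.2.1",
}

# Each module declares which packages it depends on, but not which version;
# the masterconfig is the single source of truth for versions.
MODULE_DEPS = {
    "game_runtime": ["corelib", "mathlib"],
    "tools": ["corelib"],
}

def resolve(module):
    """Resolve a module's dependencies to the versions the masterconfig
    enforces, failing loudly on any unpinned package."""
    deps = {}
    for pkg in MODULE_DEPS[module]:
        if pkg not in MASTERCONFIG:
            raise KeyError(f"{pkg} has no pinned version in the masterconfig")
        deps[pkg] = MASTERCONFIG[pkg]
    return deps
```

Because every module resolves through the same table, two modules can never silently build against different versions of the same package.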
Thinking Inside the Container | RG Engineering. Containers have taken over the world, and I, for one, welcome our new containerized overlords. They do, however, present interesting challenges for me and my fellow Rioters on the Pipeline Engineering team. My name is Maxfield Stewart, and I'm an engineer here at Riot. Lately, we've been adding Docker containers to that mix at a furious pace. The list of questions goes on for quite a while, kind of like an Ashe arrow. First, let me rewind to a year ago, when we brought continuous delivery to League of Legends. A League of Legends build is no joke. Still, the process wasn't perfect: as with all software, at times older tools required shims and bridges to make everything work together. We believe that engineering teams have to be able to totally own their technology stacks, down to administrative-level control of their build environments. We believe in Configuration as Code. To achieve these goals, we needed a world-class tech stack that could work within the range of these concepts.
The Forgotten Layer of the Test Automation Pyramid. Even before the ascendancy of agile methodologies like Scrum, we knew we should automate our tests. But we didn't. Automated tests were considered expensive to write and were often written months, or in some cases years, after a feature had been programmed. One reason teams found it difficult to write tests sooner was that they were automating at the wrong level. An effective test automation strategy calls for automating tests at three different levels, as shown in the figure below, which depicts the test automation pyramid. At the base of the test automation pyramid is unit testing. Automated user interface testing is placed at the top of the pyramid because we want to do as little of it as possible. In between sits the service layer: all applications are made up of various services. For more on Scrum and agile testing, pick up a copy of Succeeding with Agile.
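The bottom two layers of the pyramid can be sketched with a toy example: the same behavior covered by a fast unit test at the base and a coarser service-level test above it, with no UI driving required. All names here are hypothetical.

```python
# Unit under test: pure logic, cheapest to cover at the base of the pyramid.
def apply_discount(total, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(total * (1 - percent / 100), 2)

# Service wrapping the unit; the middle layer of the pyramid tests through
# this API instead of through the user interface.
def checkout_service(request):
    return {"charged": apply_discount(request["total"], request.get("discount", 0))}

# Base of the pyramid: many small, fast unit tests.
def test_unit_discount():
    assert apply_discount(100.0, 25) == 75.0

# Middle of the pyramid: fewer, broader service-level tests.
def test_service_checkout():
    assert checkout_service({"total": 100.0, "discount": 25}) == {"charged": 75.0}
```

UI tests at the top would exercise the same path end to end, but only a few of them are needed once the layers below carry the bulk of the coverage.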