Designing robust web APIs

Many companies have, over the past years, been exposing their services via web service APIs. Some do a better job than others. Once an API is publicly available, people will write code against it, and every change risks breaking those clients and making their authors unhappy. In this post, I summarize some good practices that I have used successfully in my previous projects.

Use a RESTful model. Design the web service API around HTTP, REST and the JSON format. RESTful URLs typically come in pairs: a collection URI (for example, to list all persons) as well as an individual URI for a single resource. A good API should focus on doing one thing well, rather than on multiple things for different purposes. It should also be intuitive to use and should make it difficult to commit undetected mistakes, even without reading through the documentation. On the development-process side, we should let more people review the API design, and perhaps run some pilot client projects before exposing it to the general public.
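As a minimal sketch of the collection-URI / individual-URI convention described above (framework-free Python; the `/persons` URL scheme and the sample data are illustrative assumptions, not any particular real API):

```python
import json

# Illustrative in-memory data store; in a real service this would be a database.
PERSONS = {
    "1": {"id": "1", "name": "Alice"},
    "2": {"id": "2", "name": "Bob"},
}

def handle_get(path):
    """Route a GET request path RESTfully:
    /persons       -> collection URI (list all persons)
    /persons/<id>  -> individual URI (one person)
    Returns (status_code, json_body)."""
    parts = [p for p in path.strip("/").split("/") if p]
    if parts == ["persons"]:
        return 200, json.dumps(list(PERSONS.values()))
    if len(parts) == 2 and parts[0] == "persons":
        person = PERSONS.get(parts[1])
        if person is not None:
            return 200, json.dumps(person)
        return 404, json.dumps({"error": "person not found"})
    return 404, json.dumps({"error": "unknown path"})
```

Keeping the routing this regular is what makes the API intuitive: a client that has seen the collection URI can guess the individual URI without reading the documentation.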
Comparing STRIDE-RDM, REDCap and Medrio - Services - SCCI - Stanford Medicine

Stanford School of Medicine currently offers three research data capture database development platforms, all of which are recommended over the more traditional choices of Excel, FileMaker and Access: all three satisfy the HIPAA Privacy Rule requirements, while Excel, FileMaker and Access are much harder to use in a compliant fashion. STRIDE Research Data Management (STRIDE-RDM) is the application development platform of the Stanford Translational Research Integrated Database Environment. It supports the design of custom research data capture systems and also offers integration with the STRIDE Clinical Data Warehouse, which contains both current and historical clinical data from both LPCH and SHC. STRIDE-RDM research data management systems are all custom-built by SCCI application developers in exchange for salary support to cover the time spent building the database.

Comparative analysis: when deciding which tool to use, consider questions such as whether your study involves data entry from multiple sites.
Entrypoint :: System Architecture

Entrypoint i4 uses a standards-based, cross-platform design. The extensible design permits adding or changing functionality with relative ease. Because it is built on the Java Platform, the server and client applications run on any Windows, Mac or Unix operating system supporting the Java Runtime Environment (JRE) version 7.0 (32- or 64-bit) or greater. Users access the system either via the WebClient application in a Web browser or via local applications (Application Studio, Desktop System Manager, and Desktop Workstation), which communicate with the server over a proprietary TCP/IP protocol (EPXP). A Web Services API is also exposed, allowing integration with third-party systems. The server component executes within a Java application server that supports Java Servlets (Apache Tomcat, JBoss, Jetty, GlassFish, IBM WebSphere Application Server, etc.). The system uses a modular, plug-in-based back-end storage design, which allows for flexibility when deploying to different software environments.
Functional Dependency (Normalization)
Asad Khailany, DSc.

The concept of functional dependency (the basis of normalization) was introduced by Professor Codd in 1970, when he defined the first three normal forms (first, second and third normal forms). Normalization is used to avoid or eliminate the three types of anomalies (insertion, deletion and update anomalies) from which a database may suffer.

First Normal Form: A relation is in first normal form if all its attributes are simple (atomic).

Example 1. Student-courses (Sid:pk, Sname, Phone, Courses-taken), where attribute Sid is the primary key, Sname is the student's name, Phone is the student's phone number, and Courses-taken is a table containing the course-id, course-description, credit hours and grade for each course taken by the student: Course-taken (Course-id:pk, Course-description, Credit-hours, Grade). Because Courses-taken is itself a table-valued attribute (one nested Course-taken table per student, e.g. for students St-100, St-200 and St-300), the Student-courses relation is not in first normal form.
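A minimal sketch of the 1NF decomposition of this example, using Python's built-in sqlite3 module (the table and column names follow the example above; the sample rows are illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# 1NF: the nested Courses-taken attribute is pulled out into its own
# relation, keyed by (Sid, Course_id), so every attribute is atomic.
cur.execute("CREATE TABLE Student (Sid TEXT PRIMARY KEY, Sname TEXT, Phone TEXT)")
cur.execute("""CREATE TABLE Course_taken (
    Sid TEXT, Course_id TEXT, Course_description TEXT,
    Credit_hours INTEGER, Grade TEXT,
    PRIMARY KEY (Sid, Course_id))""")

cur.execute("INSERT INTO Student VALUES ('St-100', 'Alice', '555-0100')")
cur.executemany("INSERT INTO Course_taken VALUES (?, ?, ?, ?, ?)", [
    ("St-100", "CS-101", "Intro to Databases", 3, "A"),
    ("St-100", "CS-102", "Data Structures", 3, "B"),
])

# A student's courses now come from a join, not a table-valued attribute.
rows = cur.execute("""SELECT s.Sname, c.Course_id, c.Grade
                      FROM Student s
                      JOIN Course_taken c ON s.Sid = c.Sid""").fetchall()
```

The composite key (Sid, Course_id) preserves the fact that a grade belongs to one student's enrollment in one course.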
1NF, 2NF, 3NF and BCNF in Database Normalization

Normalization is a systematic approach to decomposing tables in order to eliminate data redundancy and undesirable characteristics like insertion, update and deletion anomalies. It is a step-by-step process that puts data into tabular form by removing duplicated data from the relation tables. Normalization is used for two main purposes: eliminating redundant (useless) data, and ensuring that data dependencies make sense, i.e. that data is logically stored.

Problem without normalization: without normalization, it becomes difficult to handle and update the database without facing data loss.

Normalization rules are divided into the following normal forms: First Normal Form, Second Normal Form, Third Normal Form, and BCNF.

First Normal Form (1NF): a row of data cannot contain a repeating group of data, i.e. each attribute must hold a single atomic value. In the article's Student table, the student name Adam appears twice and the subject Math is also repeated; 1NF decomposes it into a new Student table and a separate Subject table.
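The article's tables are not reproduced in the text, so the following rows are an illustrative reconstruction (the names Adam, Alex, Math and Biology come from the article; the ages are assumptions). The sketch shows the repeating group and its 1NF split:

```python
# Flat table with a repeating group: (student, age) repeats for every subject.
flat = [
    ("Adam", 15, "Math"),
    ("Adam", 15, "Biology"),
    ("Alex", 14, "Math"),
]

# 1NF decomposition: one row per student, one row per (student, subject) pair.
students = sorted({(name, age) for name, age, _ in flat})
subjects = sorted({(name, subject) for name, _, subject in flat})
```

Each student's name and age are now stored once, and the enrollment facts live in their own relation, which is exactly the redundancy the article points out with Adam and Math.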
SQL Relational Algebra Examples

The Structured Query Language (SQL) is the common language of most database software, such as MySQL, PostgreSQL, Oracle, DB2, etc. This language translates relational theory into practice, but imperfectly: SQL is a loose implementation of relational theory, further modified in its actual implementation by the Relational Database Management System (RDBMS) software that uses it. There is a lot of literature and discussion of relational theory and its application in product and software design. Authors like Chris Date, Hugh Darwen and Fabian Pascal write extensively on the topic; they are relational theorists who advocate strict standards in design and implementation and have written many books on relational databases and design. Relational algebra and relational calculus, developed by E.F. Codd, are the mathematical basis for relational databases.

Difference: exclude rows common to both tables.
Intersection: keep only the rows common to both tables.
Division: find rows in one table that are associated with all rows of another.
Partition:
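As a minimal sketch of the difference and intersection operators in SQL (run here through Python's sqlite3 module; the two single-column tables and their rows are illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two union-compatible relations r and s with some overlapping rows.
cur.execute("CREATE TABLE r (x INTEGER)")
cur.execute("CREATE TABLE s (x INTEGER)")
cur.executemany("INSERT INTO r VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO s VALUES (?)", [(2,), (3,), (4,)])

# Difference (r - s): rows of r that do not appear in s.
diff = cur.execute("SELECT x FROM r EXCEPT SELECT x FROM s").fetchall()

# Intersection (r ∩ s): rows common to both relations.
inter = cur.execute("SELECT x FROM r INTERSECT SELECT x FROM s").fetchall()
```

Note that SQL gives difference and intersection direct keywords (EXCEPT/MINUS, INTERSECT), while division has no keyword of its own and must be expressed with nested NOT EXISTS subqueries.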
big data

The Evolving Panorama of Data, by Rebecca Parsons and Martin Fowler. Our keynote at QCon London 2012 looks at the role data is playing in our lives (and at how it's doing more than just getting bigger). We start by looking at how the world of data is changing: it's growing, becoming more distributed and connected. We then move to the industry's response: the rise of NoSQL, the shift to service integration, the appearance of event sourcing, the impact of clouds, and new analytics with a greater role for visualization. We take a quick look at how data is being used now, with a particular emphasis from Rebecca on data in the developing world. 18 April 2012, video.

A Proof-of-Concept of BigQuery, by Ian Cartwright. Can Google's new BigQuery service give customers Big Data analytic power without the need for expensive software or new infrastructure? 4 September 2012, article.

ProbabilisticIlliteracy. 5 November 2012, bliki.

Thinking about Big Data. 29 January 2013, infodeck.

Introduction to NoSQL
FAQ: Using a plugin to connect to a database

How do I connect to a database by using a Stata plugin?

ODBC vs. plugin: the easiest way to import data from a database directly into Stata is to use the odbc command. A plugin is worth considering when the odbc command will not work on your operating system (for example, Solaris), when there is no ODBC driver for the database in question, or when ODBC is too slow. If you encounter any of the above problems, you can use a Stata plugin to import and export your data directly to and from your database, provided your database has an application programming interface (API). This FAQ assumes that you have read and understood the FAQ on Stata plugins at the following URL:

The example will use ANSI C as the plugin language and gcc as the compiler. Here I will connect Stata to a MySQL database.

Create a test database: first, you need to create a test database in MySQL:

GRANT ALL PRIVILEGES ON stata_test.* TO 'user'@'localhost' IDENTIFIED BY 'password';

Note: change user, localhost, and password in the above command, which you can run from the mysql client in your terminal.
Creating and using Stata plugins

In Stata 8.1 or higher, you can create, load, and execute your own Stata plugins. Make sure that you are using the latest version of this document, which can be identified by the date at the top of this page. Note that any new features will be backwards compatible, meaning that plugins created using the current documentation should continue to work under newer versions of the Stata plugin interface (described in section 2, "What is the Stata plugin interface?"). The following topics are discussed:

1. What is a plugin? A plugin is a piece of software that adds extra features to a software package. Plugins can serve as useful and integral parts of Stata user-written commands. When describing plugins, one often uses the term dynamically linked library, or DLL for short.
2. What is the Stata plugin interface? The Stata plugin interface (SPI) is the system by which plugins are compiled, loaded into Stata, and called from Stata.
3. Advantages of plugins: plugins generally run faster than equivalent Stata ado code.
Restrict Access to Tables and Fields