Dropbox APIs

Read our docs: docs are organized by language, from .NET to Swift.
Create your app: getting started is simple and quick from the App Console.
Test your ideas: it's easy to prototype and test examples with our API Explorer.

Learn from our examples:
- Photo Watch uses our Swift SDK to let users see their Dropbox photos on Apple Watch.
- Simple Blog Demo uses our .NET SDK to create a simple blogging platform for uploading and downloading files.
- Back up and Restore uses our Python SDK to back up user settings and then restore them to a specific point in time.

Find out what's new:
- Jul 21, 2016: Stack Overflow Documentation for Dropbox APIs. We're excited to announce that we've been working with Stack Overflow on the launch of their new Stack Overflow Documentation.
- Jun 28, 2016: API v1 is now deprecated. As of today, Dropbox API v1 is deprecated.
- Apr 11, 2016: Announcing the v1-to-v2 migration guide. UPDATE, June 29, 2016: this post has now been updated to include new information about open issues.
Hook into Wikipedia using Java and the MediaWiki API | Integrating Stuff

The MediaWiki API makes it possible for web developers to access, search, and integrate all Wikipedia content into their applications. Given that Wikipedia is the ultimate online encyclopedia, there are dozens of use cases in which this might be useful. I used to post a lot of articles on this blog about using the web service APIs of third-party sites, and this is going to be another post like that. It describes how to use the Java Wikipedia API to fetch and format the contents of a Wikipedia article. The Wikipedia API makes it possible to interact with Wikipedia/MediaWiki through a web service instead of the normal browser-based web interface. We cover a basic use case: getting the contents of the "Web service" article. To fetch the contents of this article, the following URL suffices: a request to this URL will return an XML document which includes the current wiki markup for the page titled "Web service".
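The post above uses Java, but the request itself is language-agnostic. Here is a minimal sketch in Python of how such a query URL can be assembled; the endpoint and parameter names (`action=query`, `prop=revisions`, `rvprop=content`) are standard MediaWiki API parameters, while the function name is my own:

```python
from urllib.parse import urlencode

# English Wikipedia's MediaWiki API endpoint.
API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def build_query_url(title: str) -> str:
    """Build a MediaWiki API URL that returns the current wiki markup
    of the given page as XML (action=query, prop=revisions)."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "format": "xml",
        "titles": title,
    }
    return API_ENDPOINT + "?" + urlencode(params)

# For the "Web service" article:
url = build_query_url("Web service")
# Fetching this URL (e.g. with urllib.request.urlopen) returns an XML
# document containing the page's current wiki markup.
```

The same URL pasted into a browser shows the raw XML response, which is a handy way to experiment before writing any parsing code.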
Ubuntu Hardy Heron (Ubuntu 8.04 LTS) Server - Version 1.0
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited 04/24/2008

This tutorial shows how to set up an Ubuntu Hardy Heron (Ubuntu 8.04 LTS) based server that offers all services needed by ISPs and hosters: Apache web server (SSL-capable), Postfix mail server with SMTP-AUTH and TLS, BIND DNS server, Proftpd FTP server, MySQL server, Courier POP3/IMAP, quota, firewall, etc. This tutorial is written for the 32-bit version of Ubuntu 8.04 LTS, but should apply to the 64-bit version with very few modifications as well. I will use the following software:

- Web server: Apache 2.2 with PHP 5.2.4 and Ruby
- Database server: MySQL 5.0
- Mail server: Postfix
- DNS server: BIND9
- FTP server: proftpd
- POP3/IMAP: I will use the Maildir format and therefore install Courier-POP3/Courier-IMAP.

In the end you should have a system that works reliably, and if you like you can install the free web hosting control panel ISPConfig (i.e., ISPConfig runs on it out of the box).

1 Requirements
Working With the "One-Second" Rule

What is the "One-Second Rule"? The following condition in the Amazon Web Services license agreement often causes confusion or concern: You may make calls at any time that the Amazon Web Services are available, provided that you [...] do not exceed 1 call per second per IP address [...] Without the "one-second rule," Amazon's servers would be overwhelmed and unable to keep up with the demand on them.

What, Me Worry? Developers often worry about what will happen if they occasionally make more than one query per second, so they design complicated systems to prevent their programs from ever making two calls less than a second apart.

What Happens When You Exceed One Call Per Second? What happens when you regularly exceed the "one call per second" limit?

How Can I Download Everything? Many affiliate programs provide data feeds.

Caching A2S Results: You can cache the information so it doesn't have to be downloaded as often. Simple cache: if there is a result in the database, it looks at the timestamp to decide whether the entry is fresh enough to reuse.
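The two ideas above, spacing calls at least a second apart and reusing cached results until their timestamp grows stale, can be sketched as follows. This is an illustrative implementation, not Amazon's; the class names and the in-memory store are my own choices:

```python
import time

class Throttle:
    """Ensure successive calls are at least `interval` seconds apart
    (one second for Amazon's one-call-per-second rule)."""
    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough that this call is `interval` seconds
        # after the previous one, then record the new timestamp.
        remaining = self.interval - (time.monotonic() - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()

class TimestampCache:
    """Store results with a timestamp; an entry older than `max_age`
    seconds is treated as missing, which forces a re-download."""
    def __init__(self, max_age: float = 24 * 3600):
        self.max_age = max_age
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stamp = entry
        if time.monotonic() - stamp > self.max_age:
            return None  # stale: caller should fetch fresh data
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
```

A fetch routine would first consult the cache and, only on a miss, call `Throttle.wait()` before issuing the API request, so the one-second limit only costs time when a download is actually needed.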
Database download

Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use, or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL).

Where do I get...
- English-language Wikipedia: dumps from any Wikimedia Foundation project; Wikipedia dumps in SQL and XML (current revisions only, no talk or user pages).
- Other languages: in the directory you will find the latest SQL and XML dumps for the projects, not just English. Some other directories (e.g. simple, nostalgia) exist, with the same structure.
- Where are the uploaded files (image, audio, video, etc.)?

Dealing with compressed files (with notes per platform: Windows, Mac, GNU/Linux, FreeBSD).
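The dumps are distributed compressed (typically as .bz2 files), and they are large enough that you usually want to read them as a stream rather than extract them to disk first. A minimal sketch of that, using Python's standard `bz2` module (the function name is mine):

```python
import bz2

def iter_dump_lines(path: str):
    """Stream text lines out of a .bz2 dump file without
    decompressing the whole file to disk first."""
    with bz2.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            yield line

# Usage: for line in iter_dump_lines("enwiki-latest-pages-articles.xml.bz2"): ...
# (filename shown for illustration; use whatever dump you downloaded)
```

Because `bz2.open` decompresses incrementally, this keeps memory use flat even on multi-gigabyte dump files.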
Shahzad Bhatti » Blog Archive » Working with Amazon Web Services

I started at Amazon last year, but didn't actually get a chance to work with Amazon Web Services until recently, when we had to integrate with the Amazon E-Commerce Service (ECS). Amazon Web Services come in two flavors: REST and SOAP. According to inside sources, about 70% of users use REST.

Getting an Access ID: First, visit the AWS site. I will describe ECS here; it comes with 450 pages of documentation, though most of it just describes URLs and input/output fields. Other interesting links include the blog site for updates on AWS, two forums, and the FAQ.

Services: Inside ECS, you will find the following services: ItemSearch, BrowseNodeLookup, CustomerContentLookup, ItemLookup, ListLookup, SellerLookup, SellerListingLookup, SimilarityLookup, TransactionLookup.

The REST Approach: The REST approach is pretty simple; in fact, you can simply type the following URL into your browser (with your access key) and see the results in XML right away. Examples: finding images for the Harry Potter video; finding a book.
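As a rough illustration of the REST style described above, here is how such an ItemSearch URL can be assembled in Python. This is a sketch only: the endpoint and parameter names follow the old ECS 4.0 REST conventions as I recall them (the service was later renamed and eventually retired), and the function name is my own:

```python
from urllib.parse import urlencode

# Historical ECS REST endpoint (illustrative; the service no longer
# exists in this form).
ECS_ENDPOINT = "http://webservices.amazon.com/onca/xml"

def item_search_url(access_key: str, keywords: str,
                    search_index: str = "Books") -> str:
    """Build an ItemSearch REST URL that could be pasted into a
    browser to get XML results back."""
    params = {
        "Service": "AWSECommerceService",
        "Operation": "ItemSearch",
        "SubscriptionId": access_key,   # the "access key" from signup
        "SearchIndex": search_index,    # e.g. Books, Video
        "Keywords": keywords,
    }
    return ECS_ENDPOINT + "?" + urlencode(params)
```

The appeal of REST here is exactly this: the whole request is one self-describing URL, so you can debug queries in a browser before writing any client code.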
Ways to process and use Wikipedia dumps - Prashanth Ellina

Wikipedia is a superb resource for reference (taken with a pinch of salt, of course). I spend hours at a time spidering through its pages and always come away amazed at how much information it hosts. In my opinion, it ranks among the defining milestones of mankind's advancement. Apart from being available through the website, the data is provided for download so that you can create a mirror locally for quicker access.

Setting up a local copy of Wikipedia
- Windows: if you have Windows installed, Webaroo is an easy way to get Wikipedia locally as a "web pack".
- Linux: this page has instructions to set it up on Linux.
- Any operating system: Wikipedia provides static wiki dumps for download, which should work fine on any operating system with a decent web browser.
- Windows Mobile, iPhone, and BlackBerry: to access Wikipedia from your mobile, check out vTap from Veveo.

Other uses for Wikipedia data dumps / Getting the dumps: Wikipedia is huge, and this is reflected in the size of the data dumps.
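Because the dumps are so large, processing them usually means streaming through the XML one page at a time instead of loading the whole tree. A minimal sketch with Python's standard `xml.etree.ElementTree.iterparse` (the function name is mine; the `<page>`/`<title>` element names match the dump schema):

```python
import xml.etree.ElementTree as ET

def iter_titles(xml_file):
    """Stream page titles out of a Wikipedia XML dump without loading
    the whole file into memory."""
    for event, elem in ET.iterparse(xml_file, events=("end",)):
        # Dump elements may be namespaced; match on the local tag name.
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag == "title":
            yield elem.text
        elif tag == "page":
            elem.clear()  # free the finished <page> subtree
```

Calling `elem.clear()` after each completed page is what keeps memory flat; without it, iterparse still builds the full tree behind the scenes.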
Introduction

Adapted from "Explaining OAuth", published on September 5, 2007 by Eran Hammer-Lahav.

A Little Bit of History: OAuth started around November 2006, while Blaine Cook was working on the Twitter OpenID implementation. He got in touch with Chris Messina looking for a way to use OpenID together with the Twitter API to delegate authentication. They met with David Recordon, Larry Halff, and others at a CitizenSpace OpenID meeting to discuss existing solutions. Larry was looking into integrating OpenID for Ma.gnolia Dashboard Widgets. In April 2007, a Google group was created with a small group of implementers to write a proposal for an open protocol.

What Is It For? Many luxury cars today come with a valet key: a restricted key that starts the car but keeps the valet out of the trunk. Every day, new websites launch offering services which tie together functionality from other sites, and handing them your password gives away far more access than they need. This is the problem OAuth solves.

OAuth and OpenID. Who Is Going to Use It? Everyone. Is OAuth a New Concept? No. Is It Ready? OAuth Core 1.0, the main protocol, was finalized in December 2007.
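At the protocol level, OAuth Core 1.0 delegates access by having the client cryptographically sign each request rather than send a password. The following is a simplified sketch of the HMAC-SHA1 signing step (building the signature base string and MAC-ing it with the concatenated secrets); a real implementation must also handle duplicate parameter keys and the spec's strict percent-encoding rules, and the function name here is my own:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Simplified OAuth 1.0 HMAC-SHA1 signature sketch."""
    enc = lambda s: quote(str(s), safe="")
    # Normalize parameters: percent-encode, sort, join as k=v pairs.
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded-URL & encoded-params.
    base_string = "&".join([method.upper(), enc(url), enc(norm)])
    # Signing key: consumer secret and token secret joined by '&'.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Because the secret never travels with the request, a service granted OAuth access gets only the scoped capability it was delegated, which is the valet-key idea made concrete.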
The unofficial homepage of Tim Dwyer I have a new position: Senior Lecturer and Larkins Fellow at Monash University, Australia. Dissertations Tim Dwyer (2005): "Two and a Half Dimensional Visualisation of Relational Networks", PhD Thesis, The University of Sydney. (23MB pdf) Tim Dwyer (2001): "Three Dimensional UML using Force Directed Layout", Honours Thesis, The University of Melbourne (TR download) Technical Reports