How-2s

Tutorials and recipes on building various services within AWS

Exploring Automated Adjudication Workflow in Amazon Mechanical Turk (Java-Based Code Sample) : Sample Code & Libraries. About This Code Sample: This code sample creates an "Adjudication Engine" which allows a Requester to use plurality to automatically approve or reject work. Specifically, this code sample will publish a HIT with 2 Assignments and compare the Worker results. If the results don't match, the Adjudication Engine publishes a 3rd Assignment and compares the 3 results to determine which results to accept and which to reject. Requesters using this code sample may want to change the number of Assignments compared, which answer(s) are compared, or which Qualifications a Worker must possess to complete the Assignments. This code sample includes sample HIT contents for testing.
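
The published sample is Java, but the plurality rule itself is compact. The following is a hedged Python (boto3) re-sketch of that logic, not the sample's actual code; the answer-extraction helper and all identifiers are hypothetical placeholders.

# Hedged sketch of the plurality idea with boto3 (the original sample is
# Java). Question XML, HIT parameters, and the answer-extraction helper
# are hypothetical placeholders.
import boto3
from collections import Counter

mturk = boto3.client("mturk")

def extract_answer(assignment):
    # Placeholder: parse the answer of interest out of the Answer XML.
    return assignment["Answer"]

def adjudicate(hit_id):
    assignments = mturk.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"]
    )["Assignments"]
    answers = [extract_answer(a) for a in assignments]
    if len(set(answers)) == 1:
        for a in assignments:  # Workers agree: approve everything
            mturk.approve_assignment(AssignmentId=a["AssignmentId"])
        return
    if len(assignments) == 2:  # disagreement: publish a 3rd Assignment
        mturk.create_additional_assignments_for_hit(
            HITId=hit_id, NumberOfAdditionalAssignments=1
        )
        return
    # 3 answers: accept the plurality answer, reject the rest
    majority, _ = Counter(answers).most_common(1)[0]
    for a in assignments:
        if extract_answer(a) == majority:
            mturk.approve_assignment(AssignmentId=a["AssignmentId"])
        else:
            mturk.reject_assignment(
                AssignmentId=a["AssignmentId"],
                RequesterFeedback="Answer did not match the plurality answer.",
            )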

Requesters should modify the sample HIT contents. This code sample uses the Amazon Mechanical Turk Java SDK found here: Features and Benefits. Prerequisites. Note: These instructions assume a *nix system. You will need an Amazon Web Services account.

AWS Flow Framework Recipes : Sample Code & Libraries. Amazon Simple Workflow Service (SWF) helps developers automatically coordinate work in applications for better scalability and performance. To execute work scalably and with low latency, developers typically spawn multiple processes that can execute across multiple machines. They must then write custom code to distribute work, manage execution state, define how the application reacts to failed or delayed processes, and manage work streams that can complete at different times.

This custom work-coordination code adds complexity for developers who want to focus on implementing what an application does, not how the application's work gets done. To help developers automate work coordination in their applications, SWF provides a programming model for developers to express work-coordination business logic, as well as a service to manage the coordination of work itself. The recipes cover: Repeatedly Execute an Activity, Execute Multiple Activities Concurrently, Execute Workflow Logic Conditionally, and Signal a Workflow.
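
The recipes express these patterns with the AWS Flow Framework for Java; purely as a hedged illustration of the last recipe, here is what signaling a running workflow looks like against the low-level SWF API from Python with boto3. The domain and identifiers are hypothetical placeholders.

# Hedged sketch: signaling a running workflow execution through the
# low-level SWF API with boto3. Domain and identifiers are hypothetical;
# the recipes themselves express this with the AWS Flow Framework for Java.
import boto3

swf = boto3.client("swf")
swf.signal_workflow_execution(
    domain="ExampleDomain",        # hypothetical SWF domain
    workflowId="order-12345",      # hypothetical workflow ID
    signalName="cancel-order",     # the signal the decider reacts to
    input="customer requested cancellation",
)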

Sample Code & Libraries. Simply unzip the zip file into a directory and use the individual scripts, such as ses-send-email.pl and ses-get-stats.pl. The archive comes with a more extensive README file that explains how to use these tools in more detail. Having problems or questions? Please post to the Amazon SES Forum, where we will be happy to help. NOTE: Amazon SES no longer maintains these scripts. Please see the note under About these scripts below. Last Modified: Sep 29, 2015 1:31 AM GMT
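
For orientation, here is a hedged Python (boto3) sketch of roughly what the two scripts do; the addresses are hypothetical and must already be verified with SES.

# Hedged boto3 equivalent of ses-send-email.pl / ses-get-stats.pl; the
# addresses are hypothetical and must be verified with SES beforehand.
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Roughly what ses-send-email.pl does: send a simple message.
ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Hello from SES"},
        "Body": {"Text": {"Data": "This is a test message."}},
    },
)

# Roughly what ses-get-stats.pl does: fetch recent sending statistics.
for point in ses.get_send_statistics()["SendDataPoints"]:
    print(point["Timestamp"], point["DeliveryAttempts"], point["Bounces"])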

Curl is a popular command-line tool for interacting with HTTP services. Last Modified: Aug 20, 2015 16:48 GMT
This version of the EB CLI has been deprecated and a newer version is available. Last Modified: Jul 1, 2015 23:32 GMT
An internet advertising company operates a data warehouse using Hive and Amazon Elastic MapReduce. Last Modified: Apr 29, 2015 17:47 GMT
The Amazon CloudWatch Monitoring Scripts for Linux are sample scripts for monitoring memory and disk space utilization on your Amazon EC2 instances running Linux.

Importing, Exporting, and Upgrading Metadata. This section describes how to use the Metadata Loader for change and version management. The Metadata Loader copies and moves all types of metadata objects in a repository. With this utility, you can move metadata between Oracle Warehouse Builder repositories that reside on platforms with different operating systems. You can use the Design Center to run the Metadata Loader utilities. The Design Center provides a graphical interface that guides you through the process of exporting and importing metadata. This section contains the following topics: Exporting Metadata from the Design Center. You can use the Design Center to export objects from a workspace into an MDL file.

Before you attempt to export metadata, ensure you have READ privileges on any object to export. By default, READ privileges are provided on all the workspace objects to all registered users. You have two options for exporting metadata. Before starting the export operation, take care of the following. Importing Objects.

Create and Use IAM Roles for Amazon EMR - Amazon Elastic MapReduce. Create and Use IAM Roles with the Amazon EMR Console. AWS customers whose accounts were created after the release of Amazon EMR roles are required to specify an EMR (service) role and an EC2 instance profile in all regions when using the console. You can create default roles at cluster launch using the console, or you can specify other roles you may already be using.

If you are using an IAM user and creating default roles for a cluster using the console, your IAM user must have the iam:CreateRole, iam:PutRolePolicy, iam:CreateInstanceProfile, iam:AddRoleToInstanceProfile, and iam:PassRole permissions. The iam:PassRole permission allows cluster creation; the remaining permissions allow creation of the default roles. To create and use IAM roles with the console: Create and Use IAM Roles with the AWS CLI. You can create the default Amazon EMR (service) role and EC2 instance profile using the CLI. To create and use IAM roles with the AWS CLI:
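
The default roles can be created with the aws emr create-default-roles command. As a hedged companion sketch (all cluster parameters are hypothetical), here is the equivalent launch call from Python with boto3, passing the default role names:

# Hedged sketch: launching a cluster with the default EMR roles from boto3.
# Cluster parameters are hypothetical; the roles can be created beforehand
# with: aws emr create-default-roles
import boto3

emr = boto3.client("emr")
emr.run_job_flow(
    Name="example-cluster",
    ReleaseLabel="emr-4.2.0",             # hypothetical release label
    Instances={
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
    },
    ServiceRole="EMR_DefaultRole",        # the EMR (service) role
    JobFlowRole="EMR_EC2_DefaultRole",    # the EC2 instance profile
)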

Storage Considerations. EC2 instances can be configured with either ephemeral storage or persistent storage using the Elastic Block Store (EBS). Ephemeral storage is lost when instances are terminated, so it is generally not recommended unless you understand the data-loss implications. For almost all deployments EBS will be the better choice. For production systems we recommend using EBS-optimized EC2 instances and Provisioned IOPS (PIOPS) EBS volumes. Storage configuration needs vary among deployments, but for best performance we recommend separate volumes for data files, the journal, and the log. Note: Using different storage devices will affect your ability to create snapshot-style backups of your data, since the files will be on different devices and volumes.
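
A hedged boto3 sketch of provisioning the three separate PIOPS volumes recommended above; sizes, IOPS values, device names, and the instance ID are hypothetical placeholders.

# Hedged sketch: creating and attaching separate PIOPS EBS volumes for the
# MongoDB data files, journal, and log. Sizes, IOPS values, device names,
# and the instance ID are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance

volumes = {
    "/dev/sdf": {"Size": 200, "Iops": 1000},  # data files
    "/dev/sdg": {"Size": 25, "Iops": 250},    # journal
    "/dev/sdh": {"Size": 10, "Iops": 100},    # log
}
for device, spec in volumes.items():
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=spec["Size"],
        VolumeType="io1",                     # Provisioned IOPS volume
        Iops=spec["Iops"],
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(
        VolumeId=vol["VolumeId"], InstanceId=instance_id, Device=device
    )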

Using RAID levels such as RAID0, RAID1, or RAID10 can provide volume-level redundancy or capacity.

Manually Deploy MongoDB on EC2. The following steps can be used to deploy MongoDB on EC2 yourself: $ ec2-describe-instances [INSTANCE-ID] Then:

Installing Jasper Server In AWS EC2 / Linux/Ubuntu. Things we need to find out before installing Jasperserver in AWS EC2 / Linux: 1) Identify whether your CPU is 64-bit.

Use the command below to find out: uname -m. In our case the CPU is 64-bit. 2) Identify whether the Jasperserver default port 8080 is open. The command below helps you check: netstat -an | grep 8080 | grep LISTEN. If nothing is returned, your port 8080 is free to use. Installing Jasperserver: 3) Create a folder Helical and Jasperserver under home: /home/Helical/Jasperserver. 4) Download the installer. File name: jasperreports-server-5.5-linux-x64-installer.run. Command used to download: Community Server 5.5 CP. If you face an error saying "-bash: wget: command not found", run yum install wget and repeat the wget command. 5) The file jasperreports-server-5.5-linux-x64-installer.run is now downloaded under the location /home/Helical/Jasperserver. 6) Apply chmod 777 to the downloaded file. It was confirmed that port 8080 is not in use by any other process.

Finally, run the installer: ./jasperreports-server-5.5-linux-x64-installer.run

Cloud Computing RubyGems by RightScale : Sample Code & Libraries.

Enabling SAML 2.0 Federated Users to Access the AWS Management Console - AWS Identity and Access Management. You can use a role to configure your SAML 2.0-compliant IdP and AWS to permit your federated users to access the AWS Management Console. The role grants the user permissions to carry out tasks in the console. If instead you want to give SAML federated users other ways to access AWS, see one of these topics: The following diagram illustrates the flow for SAML-enabled single sign-on. Note: This specific use of SAML differs from the more general one illustrated at About SAML 2.0-based Federation because this workflow opens the AWS Management Console on behalf of the user.

The diagram illustrates the following steps: The user browses to your organization's portal and selects the option to go to the AWS Management Console. From the user's perspective, the process happens transparently: the user starts at your organization's internal portal and ends up at the AWS Management Console, without ever having to supply any AWS credentials. Configure your network as a SAML provider for AWS.
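
Under the hood, the sign-on flow exchanges the IdP's SAML assertion for temporary credentials. A minimal, hedged boto3 sketch of that exchange follows; the ARNs and the assertion are hypothetical placeholders, and for actual console single sign-on the IdP posts the assertion to https://signin.aws.amazon.com/saml instead.

# Hedged sketch: exchanging a SAML assertion for temporary credentials
# with boto3. ARNs are hypothetical; the assertion is the base64-encoded
# SAMLResponse produced by your IdP.
import boto3

saml_assertion_b64 = "..."  # base64 SAMLResponse from your IdP (placeholder)

sts = boto3.client("sts")
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/SAMLConsoleRole",      # hypothetical
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/MyIdP",  # hypothetical
    SAMLAssertion=saml_assertion_b64,
)
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])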

Amazon S3 bucket listing : Sample Code & Libraries : Amazon Web Services. Showing 1-15 of 15 results.
Simple and extendable Node.js library to access Amazon Web Services - currently implemented are specific clients for EC2 and the Product Advertising API. Last Modified: Oct 30, 2013 23:47 GMT
Password management in your S3 bucket using a web browser. AwsPass is a collection of JavaScript code that retrieves, decrypts, displays, updates, encrypts and uploads web or non-web passwords. Last Modified: Feb 18, 2011 1:59 AM GMT
Node.js client library for the Amazon Product Advertising API, including support for request signatures. Last Modified: Sep 13, 2010 23:19 GMT

A simple HTML and JavaScript application that allows you to explore the Amazon SimpleDB API without writing any code. Last Modified: Feb 25, 2010 18:38 GMT
JavaScript Scratchpad for Amazon EC2. Last Modified: Sep 24, 2009 6:08 AM GMT
A JavaScript-based tool to help you debug issues with signed requests. Last Modified: Jul 8, 2009 13:58 GMT
Last Modified: May 21, 2009 19:54 GMT

S3 Deployment - Travis CI. Browser-Side Amazon S3 Uploads, Using CORS and JavaScript. Amazon S3 direct file upload from client browser - private key disclosure. Mikeaddison93/aws-sdk-js. Building a file explorer on top of Amazon S3. Amazon S3 is a simple file storage solution that is great for storing content, but how well does it stack up when used as the storage mechanism for a web-based file explorer?

Recently I was tasked with doing just this for a client. Furthermore, as opposed to the existing solution (which used CKFinder and synchronised copies of the files between our own server and the bucket), I needed to connect to an S3 bucket directly. In this post I'll talk about how we did it. Don't reinvent the wheel... or maybe we should! Initially it seemed like the best thing to do was to use an existing implementation of a web-based file explorer that connects to S3.

I assumed that there must be a good solution that someone had taken the time to implement. So I had a look online and found 2 possible contenders: CKFinder with an S3 plugin and ELFinder with an S3 plugin. I decided to first try ELFinder to see how it would perform; it turned out to be slow. So why was it so slow? S3 is not the same as a traditional file system.
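
That difference is the crux: S3 only has keys, so a file explorer has to emulate folders with prefixes and delimiters. A hedged boto3 sketch of listing one "folder" level follows; the bucket name and prefix are hypothetical.

# Hedged sketch: emulating one "folder" level of a file explorer on S3.
# S3 has no real directories, so we list one prefix at a time and let the
# Delimiter collapse deeper keys into CommonPrefixes (the "subfolders").
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="example-bucket",   # hypothetical bucket
    Prefix="photos/2015/",     # the "folder" being browsed
    Delimiter="/",             # stop at the next slash
)
for folder in resp.get("CommonPrefixes", []):
    print("dir: ", folder["Prefix"])
for obj in resp.get("Contents", []):
    print("file:", obj["Key"], obj["Size"])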

Building the SDK for the Browser — AWS SDK for JavaScript. This section explains how you can create your own build of the AWS SDK for JavaScript. If you are working with the SDK outside of an environment that enforces CORS in your browser and want access to the full gamut of services provided by the AWS SDK for JavaScript, it is possible to build a custom copy of the SDK locally by cloning the repository and running the same build tools used to generate the default hosted version of the SDK. This chapter outlines the steps to build the SDK on your own with extra services and API versions. Using the SDK Builder: The easiest way to create your own build of the AWS SDK for JavaScript is to use the SDK builder hosted at Using Command Line Tools: In order to build the SDK using command line tools, you first need to clone the Git repository containing the SDK source.

These instructions assume you have Git and a version of Node.js installed on your machine. After cloning the repository, run npm install to install the build dependencies. Building: This will build to the file aws-sdk.js.

Building an Amazon S3 Browser with Grails : Articles & Tutorials : Amazon Web Services. By Don Denoncourt, AWS Community Developer. In this article, you develop a Grails-based Web application with which you can browse, add, delete, upload, and download your Amazon Simple Storage Service (Amazon S3) objects. Along the way, you get an introduction to Grails and the Amazon Web Services (AWS) software development kit (SDK) for Java. In 20 or 30 minutes, you'll have the browser for Amazon S3 running on your system: Generating the Browser for Amazon S3 with Grails. Grails is an amazing framework that provides rapid application development (RAD) for Web applications.

You can download the latest Grails release from Grails - Downloads, then extract the .zip file to a local directory (I typically use /opt/grails on UNIX and C:\opt\grails on Windows). With Grails and Java installed, you set up GRAILS_HOME and JAVA_HOME environment variables so that they point to their respective installation directories. Generate the Infrastructure: grails create-app s3browser and BucketObject.groovy:

How to create username and password for AWS Management Console. Working with AWS Login Profiles. Recently, the AWS IAM team announced support for Login Profiles - an easy and convenient way to create username/password pairs which can be used to sign in and use the AWS Management Console and AWS Developer Forums. With S3 Browser Freeware you can easily create and edit Login Profiles. To create a new username/password for the AWS Console: 1. Click Tools -> Access Manager (IAM). 2.

Right-click on the user and choose AWS Login Profile -> Set new password from the user's context menu. You may also use the Ctrl + Shift + E keyboard shortcut. The AWS Login Profile dialog will open: enter the new password and click Save changes to save the new Login Profile password. 3. S3 Browser generates such a link automatically.
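
For reference, a hedged boto3 sketch of what setting a Login Profile password amounts to through the IAM API itself; the user name and password are hypothetical placeholders.

# Hedged sketch: what "Set new password" amounts to through the IAM API,
# via boto3. User name and password are hypothetical placeholders.
import boto3

iam = boto3.client("iam")
iam.create_login_profile(
    UserName="alice",              # hypothetical IAM user
    Password="CorrectHorse9!",     # hypothetical initial password
    PasswordResetRequired=True,    # force a change at first sign-in
)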

Basic HTTP Auth for S3 Buckets. Amazon S3 is a simple and very useful storage of binary objects (aka "files"). To use it, you create a "bucket" there with a unique name and upload your objects. Afterwards, AWS guarantees your object will be available for download through their RESTful API. A few years ago, AWS introduced an S3 feature called static website hosting. With static website hosting, you simply turn on the feature and all objects in your bucket become available through public HTTP. This is an awesome feature for hosting static content, such as images, JavaScript files, video and audio content. When using the hosting, you need to change the CNAME record in your DNS so that it points to www.example.com.aws.amazon.com.
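
For reference, a hedged boto3 sketch of turning the static website hosting feature on; the bucket and document names are hypothetical.

# Hedged sketch: enabling static website hosting for a bucket with boto3.
# Bucket name and index/error documents are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="www.example.com",  # bucket named after the site, per the CNAME setup
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)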

After changing the DNS entry, your static website is available at www.example.com just as it would be normally. When using Amazon S3 alone, though, it is not possible to protect your website, because the content is purely static. My use case with the service was a bit more complex, though: s3auth.com.

Using CloudFront with Amazon S3 - Amazon CloudFront. If you're using Amazon S3 CNAMEs, your application uses your domain name (for example, example.com) to reference the objects in your Amazon S3 bucket instead of using the name of your bucket (for example, myawsbucket.s3.amazonaws.com). To continue using your domain name to reference objects instead of using the CloudFront domain name for your distribution (for example, d111111abcdef8.cloudfront.net), you need to update your settings with your DNS service provider. For Amazon S3 CNAMEs to work, your DNS service provider has a CNAME resource record set for your domain that currently routes end-user queries for the domain to your Amazon S3 bucket.
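
If the DNS zone is hosted in Route 53, that update can be scripted. A hedged boto3 sketch follows; the zone ID and names are hypothetical, and a www subdomain is used because a CNAME cannot sit at the zone apex.

# Hedged sketch: repointing the CNAME from the S3 bucket to the CloudFront
# distribution with boto3 and Route 53. Zone ID, domain, and distribution
# domain are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "d111111abcdef8.cloudfront.net"}
                ],
            },
        }]
    },
)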

AmazonRDSMigrationToolUserGuide.