HPC on AWS

Spot Market

Today we are coupling two popular aspects of Amazon EC2: cluster computing and Spot Instances!

More and more of our customers are finding innovative ways to use EC2 Spot Instances to save up to two-thirds off the On-Demand price. Batch processing, media rendering and transcoding, grid computing, testing, web crawling, and Hadoop-based processing are just a handful of the use cases that are running on Spot today.

Auto Scaling

Simple Storage Service (S3) Documentation

Scalable Web Architectures (w/ Ruby and Amazon S3)

Amazon S3

At its inception, Amazon charged end users US$0.15 per gigabyte-month, with additional charges for bandwidth used in sending and receiving data, and a per-request (GET or PUT) charge.[4] On November 1, 2008, pricing moved to tiers in which end users storing more than 50 terabytes receive discounted pricing.[5] Amazon says that S3 uses the same scalable storage infrastructure that Amazon.com uses to run its own global e-commerce network.[6] Amazon S3 is reported to store more than 2 trillion objects as of April 2013.[7] This is up from 102 billion objects as of March 2010,[8] 64 billion objects in August 2009,[9] 52 billion in March 2009,[10] 29 billion in October 2008,[5] 14 billion in January 2008, and 10 billion in October 2007.[11] S3 uses include web hosting, image hosting, and storage for backup systems.
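The tiered pricing described above can be illustrated with a little arithmetic. The base rate is the historical US$0.15 per gigabyte-month quoted in the paragraph; the discounted tier rate below is an invented round number used only for the sake of the example:

```python
# Illustrative only: the base rate is S3's original historical price;
# the discounted tier rate and threshold behavior are assumptions.

def monthly_storage_cost(gb, base_rate=0.15, tier_threshold_gb=50_000,
                         tier_rate=0.12):
    """Cost of storing `gb` gigabytes for one month under a simple
    two-tier scheme: base rate up to the threshold, discounted above it."""
    if gb <= tier_threshold_gb:
        return gb * base_rate
    return (tier_threshold_gb * base_rate
            + (gb - tier_threshold_gb) * tier_rate)

print(round(monthly_storage_cost(100), 2))     # 15.0
print(round(monthly_storage_cost(60_000), 2))  # 8700.0: 50 TB at base + 10 TB discounted
```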

S3 guarantees 99.9% monthly uptime,[12] i.e. not more than 43 minutes of downtime per month.[13]

s3fs

Copy Proposal

Summary: when you want to create a copy of an object in Amazon S3, today you must re-upload your existing object to the new name.

If you do not have a copy of the object, you must first download it and then re-upload it to Amazon S3, incurring data transfer charges for both the download and the upload, as well as a GET and a PUT request charge. By using copy, these operations are combined into a single operation, which saves both time and money.
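A rough sketch of the savings argument, with made-up transfer and request rates (these are not real AWS prices):

```python
# Illustrative arithmetic: copying via download + re-upload pays data
# transfer in both directions, while a server-side copy does not.
# All rates below are hypothetical round numbers.

TRANSFER_OUT_PER_GB = 0.10   # hypothetical download (data-out) rate
TRANSFER_IN_PER_GB = 0.10    # hypothetical upload (data-in) rate
REQUEST_CHARGE = 0.00001     # hypothetical per-request (GET or PUT) charge

def copy_via_reupload(size_gb):
    """One GET (download) plus one PUT (upload): transfer paid both ways."""
    return size_gb * (TRANSFER_OUT_PER_GB + TRANSFER_IN_PER_GB) + 2 * REQUEST_CHARGE

def copy_server_side(size_gb):
    """A single copy request; no data leaves S3, so no transfer charges."""
    return REQUEST_CHARGE

size = 10  # GB
print(round(copy_via_reupload(size) - copy_server_side(size), 5))  # 2.00001 saved
```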

Copy with DevPay

The Amazon Simple Storage Service lets you copy an object.

For more information about the Amazon S3 copy feature, go to the section about copying objects in the Amazon Simple Storage Service Developer Guide. To understand how DevPay products can use the copy feature, think of the copy operation as two steps: the DevPay product reads the object from the source bucket and then writes the object to the destination bucket.

CloudFront

Download this AWS-sponsored Frost & Sullivan white paper to learn why Amazon.com chose Amazon CloudFront to deliver the vast majority of its global CDN traffic.

Download the Amazon CloudFront CDN paper. Amazon CloudFront is a content delivery web service. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.

CDN Speed Test

CloudFront Performance Tips

Amazon CloudFront is a content delivery network.

You can place files on Amazon S3 and CloudFront, and then point static URLs in your website HTML to serve from CloudFront.

Invalidation

This action creates a new invalidation batch request.
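The POST body for such a request can be sketched as follows. The element layout follows the 2014-01-31 InvalidationBatch schema as commonly documented, so treat it as illustrative rather than authoritative:

```python
# Builds the XML body for a CloudFront invalidation batch request.
# The paths and caller reference below are invented examples.

def invalidation_batch(paths, caller_reference):
    """Return the InvalidationBatch XML for the given object paths.
    CallerReference is a unique string that guards against replays."""
    items = "".join(f"<Path>{p}</Path>" for p in paths)
    return (
        '<InvalidationBatch xmlns='
        '"http://cloudfront.amazonaws.com/doc/2014-01-31/">'
        f"<Paths><Quantity>{len(paths)}</Quantity>"
        f"<Items>{items}</Items></Paths>"
        f"<CallerReference>{caller_reference}</CallerReference>"
        "</InvalidationBatch>"
    )

body = invalidation_batch(["/index.html", "/images/logo.png"], "batch-001")
# POST this body to /2014-01-31/distribution/<distribution ID>/invalidation
print(body)
```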

For more information about invalidation, go to Invalidating Objects in the Amazon CloudFront Developer Guide. Important: you can invalidate most types of objects that are served by a web distribution, but you cannot invalidate media files in the Microsoft Smooth Streaming format when you have enabled Smooth Streaming for the corresponding cache behavior. In addition, you cannot invalidate objects that are served by an RTMP distribution. To create an invalidation batch request, you do a POST on the 2014-01-31/distribution/distribution ID/invalidation resource.

Querystring Invalidation

The links you create to your objects can be one of the two types listed in the following table.

A private content distribution is one that serves content that is not publicly readable. You can configure a private content distribution to use either basic URLs or signed URLs, but not both. For more information, see Using a Signed URL to Serve Private Content. When you create a distribution, you receive the CloudFront domain name associated with that distribution.

Survey

Custom Origins Limitations

Global Infrastructure

Amazon S3 Performance Tips & Tricks

Today's guest post is brought to you by Doug Grismore, Director of Storage Operations for AWS.

Doug has some useful performance tips and tricks that will help you to get the best possible performance from Amazon S3. There's also information about a special S3 hiring event that will take place later this week in Seattle. Update (September 2013) - The information below, while still largely accurate, has been supplanted by a newer document, S3 Request Rate and Performance Considerations. -- Jeff; We've worked with a large number of customers over the last few years getting some truly massive workloads into and out of Amazon S3.
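One of the core ideas in the tips that follow is that S3 partitions keys by prefix, so very hot workloads benefit from keys that do not all share one sequential prefix. A minimal sketch of the usual remedy, prefixing keys with a short hash (the key names here are invented):

```python
# Sketch of key-name randomization for high-request-rate S3 workloads:
# prepend a few characters of a hash so lexicographically adjacent
# natural keys (sequential IDs, timestamps) spread across partitions.

import hashlib

def partition_friendly_key(natural_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hex digest of the natural key as a prefix."""
    digest = hashlib.md5(natural_key.encode()).hexdigest()[:prefix_len]
    return f"{digest}/{natural_key}"

print(partition_friendly_key("2013/05/15/photo-0001.jpg"))
print(partition_friendly_key("2013/05/15/photo-0002.jpg"))
# The two keys almost certainly no longer share a leading prefix.
```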

First: for smaller workloads (under 50 total requests per second), none of the below applies, no matter how many total objects one has! S3 scales to both short-term and long-term workloads far, far greater than this. Some high-level design concepts are necessary here to explain why the approach below works. Objects in S3 are addressed as bucketname/keyname, and keys are partitioned by prefix.

Direct Connect

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS.

Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1Q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces.

DynamoDB

CTO Article

Today is a very exciting day as we release Amazon DynamoDB, a fast, highly reliable, and cost-effective NoSQL database service designed for internet-scale applications. DynamoDB is the result of 15 years of learning in the areas of large-scale non-relational databases and cloud services.

Several years ago we published a paper on the details of Amazon’s Dynamo technology, which was one of the first non-relational databases developed at Amazon. The original Dynamo design was based on a core set of strong distributed systems principles resulting in an ultra-scalable and highly reliable database system. Amazon DynamoDB, which is a new service, continues to build on these principles, and also builds on our years of experience with running non-relational databases and cloud services, such as Amazon SimpleDB and Amazon S3, at scale.
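The low-level DynamoDB API types every attribute value explicitly (strings as `{"S": ...}`, numbers as `{"N": ...}` carried as strings). A minimal encoder for flat items of strings and numbers, as a sketch; the sample item is invented:

```python
# Encodes a flat Python dict into DynamoDB's wire-format attribute
# values. Only strings and numbers are handled in this sketch.

def to_dynamodb_item(item: dict) -> dict:
    encoded = {}
    for name, value in item.items():
        if isinstance(value, bool):
            raise TypeError("booleans not handled in this sketch")
        if isinstance(value, str):
            encoded[name] = {"S": value}
        elif isinstance(value, (int, float)):
            encoded[name] = {"N": str(value)}  # numbers travel as strings
        else:
            raise TypeError(f"unsupported type for {name!r}")
    return encoded

item = to_dynamodb_item({"user_id": "alice", "score": 42})
print(item)  # {'user_id': {'S': 'alice'}, 'score': {'N': '42'}}
```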

Relational Database Service (RDS)

Amazon RDS gives you online access to the capabilities of a MySQL, Oracle, Microsoft SQL Server, PostgreSQL, or Amazon Aurora relational database management system. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery. You benefit from the flexibility of being able to scale the compute resources or storage capacity associated with your Database Instance (DB Instance) via a single API call. Database Instances using Amazon RDS's MySQL, Oracle, SQL Server, and PostgreSQL engines can be provisioned with General Purpose (SSD) Storage, Provisioned IOPS (SSD) Storage, or Magnetic Storage.

DB Parameter Group Deployment

Changing MySQL DB parameter on RDS

Posted by Madhu Donepudi on November 16, 2010.

RDS Monitoring

MySQL on AWS
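The RDS section above mentions backups retained for a user-defined period, enabling point-in-time recovery. The restorability check that implies can be sketched as follows (simplified, with invented dates; real RDS has more rules):

```python
# Simplified point-in-time-recovery window check: a restore target must
# fall inside the backup retention window and cannot be in the future.

from datetime import datetime, timedelta

def restorable(target: datetime, now: datetime, retention_days: int) -> bool:
    return now - timedelta(days=retention_days) <= target <= now

now = datetime(2014, 6, 1, 12, 0)
print(restorable(datetime(2014, 5, 30), now, retention_days=7))  # True
print(restorable(datetime(2014, 5, 20), now, retention_days=7))  # False
```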

Simple Workflow Service (SWF)

A Dream for Process-Driven Automators

Amazon's latest addition to its web services suite, the Amazon SWF API, is meant to connect many of its other services together. At first it is a bit too generic to wrap your brain around. SWF, or Simple Workflow, can be connected to any services, including self-hosted systems and other cloud providers. Amazon has created a framework for stepping through complex business and development processes. The announcement post mentions that an order placed on Amazon.com has at least 40 different steps. It also shows an example of a less complex process, uploading a photo, that still takes 13 steps.
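SWF itself coordinates steps across distributed workers, but the flavor of a step-driven process like the photo upload just described can be sketched locally as an ordered list of steps run by a tiny driver. The step names here are invented for illustration, not taken from the announcement post:

```python
# A toy step-driven workflow: each step receives and returns a state
# dict, and the driver records which steps have completed.

def run_workflow(steps, state):
    for step in steps:
        state = step(state)
        state.setdefault("completed", []).append(step.__name__)
    return state

def validate_image(state):
    state["valid"] = state["upload"].endswith(".jpg")
    return state

def create_thumbnail(state):
    state["thumbnail"] = state["upload"].replace(".jpg", "-thumb.jpg")
    return state

def notify_user(state):
    state["notified"] = True
    return state

result = run_workflow([validate_image, create_thumbnail, notify_user],
                      {"upload": "photo.jpg"})
print(result["completed"])  # ['validate_image', 'create_thumbnail', 'notify_user']
```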

Elastic MapReduce

Amazon Elastic MapReduce

Elastic Compute Cloud (EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.

Linux AMI Guide

You Should Use EBS

Creating AMI from a running instance

Ubuntu EC2StartersGuide

AWS Elastic Beanstalk

Simple Email Service (SES)

Personal Cloud Computing + Netbooks = Mobile Supercomputing?

EC2 for Poets

Amazon Virtual Private Cloud