S3
Glacier - archival storage for data that is not frequently accessed
Amazon SimpleDB - smaller datasets, NoSQL, key-value
Amazon RDS - relational DB, built on MySQL
DynamoDB - managed NoSQL key-value store
CloudFront - CDN
http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html#healthcheck
A load balancer accepts incoming traffic from clients and routes requests to its registered EC2 instances in one or more Availability Zones. The load balancer also monitors the health of its registered instances and ensures that it routes traffic only to healthy instances. When the load balancer detects an unhealthy instance, it stops routing traffic to that instance, and then resumes routing traffic to that instance when it detects that the instance is healthy again.
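The health-check settings described on that page can also be inspected and adjusted from the CLI. A sketch for a Classic Load Balancer (the name my-load-balancer and the /ping path are made-up examples):

```shell
# Configure the health check: probe HTTP:80/ping every 30s,
# mark unhealthy after 2 failed checks, healthy again after 2 passing checks
aws elb configure-health-check \
    --load-balancer-name my-load-balancer \
    --health-check Target=HTTP:80/ping,Interval=30,Timeout=3,UnhealthyThreshold=2,HealthyThreshold=2

# See the current InService / OutOfService state of each registered instance
aws elb describe-instance-health --load-balancer-name my-load-balancer
```

Note these are the classic `aws elb` commands; ALBs/NLBs use the `aws elbv2` namespace instead.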
https://superuser.com/questions/338296/how-to-use-yum-to-reinstall-all-dependencies-of-a-given-package
rpm -qa | xargs yum reinstall
yum reinstall $(yum list installed | awk '{print $1}')
yum list installed
Had to install gcc and install mtools as root.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compile-software.html
Because software compilation is not a task that every Amazon EC2 instance requires, these tools are not installed by default, but they are available in a package group called "Development Tools" that is easily added to an instance with the yum groupinstall command.
yum groupinstall "Development Tools"
sudo yum clean all
sudo yum update
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-software.html
You can also use yum install to install RPM package files that you have downloaded from the Internet.
yum install my-package.rpm
http://cloudacademy.com/blog/aws-cli-a-beginners-guide/
aws s3 ls
~/.aws/config
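A minimal ~/.aws/config might look like this (the region and output values are just illustrative):

```ini
[default]
region = us-east-1
output = json
```

Access keys go in the separate ~/.aws/credentials file; running `aws configure` interactively writes both files for you.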
aws iam list-users --output table
List all your EC2 tags:
aws ec2 describe-tags --output table
Play around with outputs, help, or whatever
aws ec2 describe-spot-price-history help
aws ec2 describe-instances
$ aws help
The following command lists the available subcommands for Amazon EC2.
$ aws ec2 help
The next example lists the detailed help for the EC2 DescribeInstances operation, including descriptions of its input parameters, filters, and output. Check the examples section of the help if you are not sure how to phrase a command.
$ aws ec2 describe-instances help
aws s3 ls s3://mybucket
aws s3 ls s3://mybucket --recursive
aws s3 ls s3://mybucket --recursive --human-readable --summarize
aws s3 ls help
http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
The sync command synchronizes the contents of a bucket and a directory, or two buckets.
http://stackoverflow.com/questions/31942341/selective-file-download-in-aws-s3-cli
This command will copy all files starting with 2015-08-15:
aws s3 cp s3://BUCKET/ folder --exclude "*" --include "2015-08-15*" --recursive
If your goal is to synchronize a set of files without copying them twice, use the sync command:
aws s3 sync s3://BUCKET/ folder
That will copy all files that have been added or modified since the previous sync.
In fact, this is the equivalent of the above cp command:
aws s3 sync s3://BUCKET/ folder --exclude "*" --include "2015-08-15*"
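Before running a large sync it can help to preview what would be transferred; the `--dryrun` flag prints the planned operations without copying anything (BUCKET and folder are placeholders, as above):

```shell
# Preview which files sync would copy; nothing is actually transferred
aws s3 sync s3://BUCKET/ folder --dryrun

# The same flag works for cp
aws s3 cp s3://BUCKET/ folder --recursive --dryrun
```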
https://aws.amazon.com/blogs/developer/amazon-s3-transfermanager/
TransferManager provides asynchronous management for uploads and downloads between your application and Amazon S3. You can easily check on the status of your transfers, add handlers to run code when a transfer completes, cancel transfers, and more.
TransferManager tx = new TransferManager(credentials);
// The upload and download methods return immediately, while
// TransferManager processes the transfer in the background thread pool
Upload upload = tx.upload(bucketName, myFile.getName(), myFile);
Depending on the size and data source for your upload, TransferManager adjusts the algorithm it uses to process your transfer, in order to get the best performance and reliability. Whenever possible, uploads are broken up into multiple pieces, so that several pieces can be sent in parallel to provide better throughput. In addition to higher throughput, this approach also enables more robust transfers, since an I/O error in any individual piece means the SDK only needs to retransmit the one affected piece, and not the entire transfer.
TransferManager includes several more advanced features, such as recursively downloading entire sections of S3 buckets, or the ability to clean up pieces of failed multipart uploads. One of the more commonly used options is the ability to attach a progress listener to your uploads and downloads, which can run custom code at different points in the transfer’s lifecycle.
// You can set a progress listener directly on a transfer, or you can pass one into
// the upload object to have it attached to the transfer as soon as it starts
upload.setProgressListener(new ProgressListener() {
    // This method is called periodically as your transfer progresses
    public void progressChanged(ProgressEvent progressEvent) {
        System.out.println(upload.getProgress().getPercentTransferred() + "%");
        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            System.out.println("Upload complete!!!");
        }
    }
});
upload.waitForCompletion();
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
TransferManager makes extensive use of Amazon S3 multipart uploads to achieve enhanced throughput, performance and reliability. When possible, TransferManager attempts to use multiple threads to upload multiple parts of a single upload at once.
TransferManager is responsible for managing resources such as connections and threads; share a single instance of TransferManager whenever possible. TransferManager, like all the client classes in the AWS SDK for Java, is thread safe. Call TransferManager.shutdownNow() to release the resources once the transfer is complete.
Transfers can be paused and resumed at a later time. They can also survive a JVM crash, provided the information required to resume the transfer is given as input to the resume operation.
Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region
https://aws.amazon.com/cn/message/41926/
http://coolshell.cn/articles/17737.html