It took me more than three months and two attempts to pass this exam, so I have to say the AWS SAA-C03 is genuinely difficult. I spent roughly one month on an online course and two months on practice exams, and the practice exams were clearly the more effective of the two: lectures alone only left a vague impression, while working through questions showed me exactly which details I didn't really know, so I could go back and patch the gaps. Even with solid preparation, you still need to learn the quirks of how the exam questions are phrased. Below are the notes I compiled from three months of practice questions.
| AWS service | Keywords and key points |
|---|---|
| IAM | IAM Identities (Users, Groups, Roles) |
| SCP | OU, member accounts, service-related; does not affect service-linked roles |
| KMS | key encryption, IAM Policy |
| Secrets Manager | key rotation, secret storage |
| Cognito | syncs account authentication with social media and SaaS providers |
| Cognito User Pools | identity management, authentication |
| Cognito Identity Pools | grants apps access to AWS service resources |
| Control Tower | centralized, multi-account management |
| SQS | long transactions, processes, decoupling, polling |
| SQS FIFO | 300 SQS transactions per second; 3,000 messages per second with batching |
| SQS Long Polling | responds only once at least one message is available; fewer empty responses, so cheaper (see the sketch after this table) |
| SQS Short Polling | responds even when no message is found |
| SNS | decoupling, notifications |
| Amazon MQ | MQ, MQTT, RabbitMQ |
| EventBridge | event matching, many event sources, an all-in-one event service |
| Kinesis | real-time, streaming data, big data, shards, high volume at high speed |
| Kinesis Data Firehose | streaming plus loading into S3, Redshift, Splunk |
| Pinpoint | SMS, text messages, notifications |
| Multi-AZ | high availability |
| Read replica | offloads read traffic when load is heavy |
| CloudFormation Stack | creates resources in a single account and region |
| CloudFormation StackSets | deploys resources across multiple accounts and regions |
| Lambda | serverless, cost saving, 15-minute limit, short-lived tasks |
| Lambda@Edge | runs at CloudFront edge locations, acceleration |
| ALB, NLB | scalable |
| ASG | availability, scale out and in, scalable |
| Global Accelerator | acceleration, UDP |
| Config | tracks and records AWS resource configurations; security, auditing, cost efficiency |
| Inspector | detects software vulnerabilities and weaknesses |
| GuardDuty | malware protection (EC2, ECS, EKS), unauthorized behavior, analyzes VPC Flow Logs/DNS/CloudTrail event logs |
| Budgets | cost control, alert notifications, short-term |
| Cost Explorer | visual, graphical analysis of spend and cost patterns, long-term |
| Systems Manager | automation, remote management, deployment |
| Beanstalk | PaaS, fast |
| Trusted Advisor | cost analysis, configuration optimization; recommendations on security, performance, and cost efficiency |
| Shield Advanced | DDoS, combines with WAF, L3/4/7 protection |
| WAF | SQL injection, XSS, attaches to CloudFront and ALB, web application firewall |
| Firewall Manager | multi-account, multi-firewall, multi-region; the full protection bundle for complex setups (WAF, Network Firewall, Shield, Route 53 Resolver DNS Firewall, Security Groups) |
| Security Group | instance level, allow rules only, stateful, must be attached to specific instances |
| Network ACL | subnet level, allow and deny rules, stateless, applies to every instance in the subnet |
| NAT | internet access, outbound only |
| IGW (Internet Gateway) | internet access, allows two-way communication |
| DataSync | NFS, SMB, FSx, EFS, S3; large-scale, complex, synchronization |
| Transfer Family | FTP and SFTP flavors; transfers data into S3, EFS |
| EBS | block volume/disk attached to an instance |
| EFS | shared file system, files |
| Transfer Acceleration | speeds up transfers into S3 via CloudFront edge locations |
| Storage Gateway | comes in the three types below |
| 1. File Gateway | NFS, SMB, file interface |
| 2. Volume Gateway | iSCSI volumes, EBS, Stored Volumes, Cached Volumes |
| 3. Tape Gateway | tape, S3 Glacier |
| SSE-S3 | S3-managed keys, single purpose, free |
| SSE-C | S3 encrypts, the customer keeps the keys, charged but inexpensive |
| SSE-KMS | KMS encrypts and manages the keys, more secure but pricier |
| Client-Side Encryption | the client encrypts and manages the keys itself, free, suited to the financial industry |
| S3 Object Lock | prevents objects in S3 from being deleted (suited to the financial industry) |
| S3 Glacier Vault Lock | lock that protects an S3 Glacier vault against accidental deletion |
| S3 Standard | standard tier, fast access |
| S3 Intelligent-Tiering | a mix of frequently and infrequently accessed data; the system classifies it automatically |
| S3 Standard-IA (Infrequent Access) | infrequently accessed data; ideal when it is retrieved about once a month |
| S3 One Zone-IA | cheaper than Standard-IA; acceptable when data loss is tolerable |
| S3 Glacier | cheap; retrieval takes minutes to hours |
| S3 Glacier Deep Archive | cheapest; retrieval takes 12-48 hours |
| RDS | MySQL, Oracle, MSSQL, relational |
| DynamoDB | NoSQL, MongoDB-style, flexible schema, key-value, non-relational |
| Aurora | global, fast, high performance, scalable, highly available |
| Aurora Serverless | adjusts capacity automatically, pay per use, short bursts of heavy demand |
| Quantum Ledger Database (QLDB) | cryptographic verification, change history |
| ElastiCache for Redis | supports geospatial data and Multi-AZ, supports replication and snapshots, no multithreading, more complex |
| ElastiCache for Memcached | no geospatial or Multi-AZ support, no replication or snapshots, supports multithreading, simple |
| On-Demand Instance | pay as you go, billed for what you use, most flexible, suited to development environments, cost-optimization choice |
| Reserved Instance | committed term, steady and predictable workloads, suited to production environments |
| Spot Instance | bid-based pricing, lowest price, less stable, workloads that tolerate interruption |
| Cluster placement group | HPC, high performance, fast |
| Spread placement group | small numbers of instances that must be isolated; reduces correlated failures |
| Partition placement group | Hadoop, low failure rate |
| Dedicated Host | dedicated physical host, fully isolated, full control over placement, meets compliance and licensing requirements, expensive |
| Dedicated Instance | dedicated instances, billed per instance, can be purchased as Spot/On-Demand/Reserved |
| Glue | ETL, crawlers, JDBC |
| AppFlow | SaaS |
| EMR | Spark, Glue |
| Rekognition | facial recognition |
| Transcribe | speech recognition |
| Textract | OCR |
| Athena | SQL queries, ODBC |
| Redshift | data warehouse; provides data storage plus SQL query capability |
| Redshift Spectrum | run SQL queries directly against S3 |
| QuickSight | BI; does not store the data itself; data visualization and analysis |
| Geoproximity | routes based on the distance between the client and AWS resources |
| Geolocation | routes traffic directly to a specific geographic region |
| CloudWatch | logs |
| CloudTrail | API call tracking |
| RESTful API | stateless |
| WebSocket API | stateful |
| Neptune | graph database, serverless, social relationships |
| SageMaker | ML, Machine Learning |
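To make the long-polling versus short-polling rows concrete, here is a minimal boto3 sketch. The region, account ID, and queue URL are placeholders, not values from the exam:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
# Placeholder queue URL; substitute your own.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# Short polling (WaitTimeSeconds=0): returns immediately, even when the queue
# is empty, so you pay for many empty responses.
short_poll = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=0,
)

# Long polling (WaitTimeSeconds up to 20): the call waits until at least one
# message arrives or the timeout expires, cutting down on empty responses.
long_poll = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in long_poll.get("Messages", []):
    print(message["Body"])
    # Delete after successful processing so the message is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```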
For practice questions I used the Udemy and examtopics banks. The latter is quite accurate, but unlocking the full bank costs a fair bit of money, so I spent most of my time on Udemy and went through the sets repeatedly to reinforce my memory. Before my first (failed) attempt I had only gone through them twice; before the second attempt I went through them four or five times in a row and simply memorized the questions I couldn't figure out. Some questions repeat fairly often, and a few showed up on both of my attempts, for example:
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
- A. Enable versioning on the S3 bucket.
- B. Enable MFA Delete on the S3 bucket.
- C. Create a bucket policy on the S3 bucket.
- D. Enable default encryption on the S3 bucket.
- E. Create a lifecycle policy for the objects in the S3 bucket.
Answer: AB
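For reference, options A and B each map to a single bucket-level setting. Below is a minimal boto3 sketch, assuming a placeholder bucket name, account ID, and MFA device ARN; note that enabling MFA Delete additionally requires the root user's credentials:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-critical-data-bucket"  # placeholder bucket name

# A. Enable versioning so overwritten or deleted objects keep prior versions.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# B. Enable MFA Delete on the same bucket. The MFA parameter is the MFA
# device ARN followed by the current code, and the call must be made as root.
s3.put_bucket_versioning(
    Bucket=bucket,
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```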
A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.
Which solution will meet these requirements MOST cost-effectively?
- A. Copy the records from the application into an Amazon Redshift cluster.
- B. Copy the records from the application into an Amazon Neptune cluster.
- C. Copy the records from the application into an Amazon Timestream database.
- D. Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
Answer: D
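Why D: Amazon QLDB is the managed ledger database whose journal is immutable and cryptographically verifiable, which is exactly what the question asks for. A minimal boto3 sketch of creating a ledger, with a made-up ledger name:

```python
import boto3

qldb = boto3.client("qldb")

# Create an immutable, cryptographically verifiable ledger for the
# accounting records. The name and settings here are illustrative only.
qldb.create_ledger(
    Name="accounting-records",
    PermissionsMode="STANDARD",
    DeletionProtection=True,
)
```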
There was also a question I only learned about from the practice bank the night before the exam; without doing the practice questions I probably wouldn't have been able to solve it:
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
- C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (Cloud Watch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
- D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Answer: B
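Why B: Amazon AppFlow is the managed integration service for pulling data from SaaS sources straight into S3, so the EC2 layer disappears, and the completion notice is just an ordinary S3 event notification to an SNS topic. A minimal boto3 sketch of that notification piece, with placeholder bucket and topic names (the SNS topic's access policy must also allow the S3 service to publish to it):

```python
import boto3

s3 = boto3.client("s3")

# Publish to an SNS topic whenever an object upload to the bucket completes.
# Bucket name and topic ARN are placeholders.
s3.put_bucket_notification_configuration(
    Bucket="example-analysis-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-complete",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```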