[Sep 26, 2022] Prepare For The Professional-Cloud-Architect Question Papers In Advance [Q119-Q142]

Professional-Cloud-Architect PDF Dumps Real 2022 Recently Updated Questions

Q119. Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google.
B. Federate authentication via SAML 2.0 to the existing Identity Provider.
C. Provision users in Google using the Google Cloud Directory Sync tool.
D. Ask users to set their Google password to match their corporate password.
Explanation:
Provision users to Google's directory. The global Directory is available to both Cloud Platform and G Suite resources and can be provisioned by a number of means. Provisioned users can take advantage of rich authentication features including single sign-on (SSO), OAuth, and two-factor verification.
You can provision users automatically using one of the following tools and services:
- Google Cloud Directory Sync (GCDS)
- Google Admin SDK
- A third-party connector
GCDS is a connector that can provision users and groups on your behalf for both Cloud Platform and G Suite. Using GCDS, you can automate the addition, modification, and deletion of users, groups, and non-employee contacts. You can synchronize the data from your LDAP directory server to your Cloud Platform domain by using LDAP queries. This synchronization is one-way: the data in your LDAP directory server is never modified.
References: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#authentication-and-identity

Q120. You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?
A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore
Explanation:
https://cloud.google.com/solutions/data-analytics-partner-ecosystem
https://zulily-tech.com/2015/08/10/leveraging-google-cloud-dataflow-for-clickstream-processing/
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
- Low-latency read/write access
- High-throughput analytics
- Native time series support
Common workloads:
- IoT, finance, adtech
- Personalization, recommendations
- Monitoring
- Geospatial datasets
- Graphs
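To make the Bigtable recommendation concrete, here is a minimal sketch of ingesting click events with the google-cloud-bigtable Python client. The project, instance, table, and column family names are assumptions, and the table is presumed to already exist:

```python
from datetime import datetime, timezone

from google.cloud import bigtable

# Hypothetical names; assumes a 'click_events' table with a 'clicks'
# column family already created in the instance.
client = bigtable.Client(project="my-project")
table = client.instance("analytics-instance").table("click_events")

def record_click(user_id: str, url: str) -> None:
    # Keying by user plus timestamp keeps one user's clicks contiguous
    # for time-range scans while spreading concurrent writes across nodes.
    ts = datetime.now(timezone.utc)
    row = table.direct_row(f"{user_id}#{ts.isoformat()}")
    row.set_cell("clicks", "url", url.encode("utf-8"), timestamp=ts)
    row.commit()

record_click("user-123", "https://example.com/promo")
```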
Topic 5, Dress4Win Case 2

Company Overview
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model.
The application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment
The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL. 1 server for user data, inventory, static data:
- MySQL 5.8
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis 3 server cluster for metadata, social graph, caching. Each server is:
- Redis 3.2
- 4 core CPUs
- 32 GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content:
- Tomcat - Java
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32 GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32 GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN - MySQL databases:
- 1 PB total storage; 400 TB available
NAS - image storage, logs, backups:
- 100 TB total storage; 35 TB available

Business Requirements
- Build a reliable and reproducible environment with scaled parity of production.
- Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
- Improve business agility and speed of innovation through rapid provisioning of new resources.
- Analyze and optimize architecture for performance in the cloud.

Technical Requirements
- Easily create non-production environments in the cloud.
- Implement an automation framework for provisioning resources in cloud.
- Implement a continuous deployment process for deploying applications to the on-premises data center or cloud.
- Support failover of the production environment to cloud during an emergency.
- Encrypt data on the wire and at rest.
- Support multiple private connections between the production data center and cloud environment.

Executive Statement
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

Q121. For this question, refer to the TerramEarth case study. Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
A. Opex/capex allocation, LAN changes, capacity planning
B. Capacity planning, TCO calculations, opex/capex allocation
C. Capacity planning, utilization measurement, data center expansion
D. Data center expansion, TCO calculations, utilization measurement
Q122. Case Study: 3 - JencoMart Case Study

Company Overview
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from only physical stores to a stores and online hybrid model, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment
JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.
Database:
- Oracle Database stores user profiles
- PostgreSQL database stores user credentials - homed in US West
- Authenticates all users
Compute:
- 30 machines in US West Coast
- 20 machines in US East Coast
- Each machine has multi-core CPUs and HDDs in RAID 1
Storage:
- Access to shared 100 TB SAN in each location
- Tape backup every week

Business Requirements
- Optimize for capacity during peak periods and value during off-peak periods
- Guarantee service availability and support
- Reduce on-premises footprint and associated financial and environmental impact
- Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
- Expand services into Asia

Technical Requirements
- Assess key applications for cloud suitability
- Modify applications for the cloud
- Move applications to a new infrastructure
- Leverage managed services wherever feasible
- Sunset 20% of capacity in existing data centers
- Decrease latency in Asia

CEO Statement
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store experiences. As a large global company, we also have a responsibility to the environment through 'green' initiatives and policies.

CTO Statement
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine learning to improve our service to customers.

CFO Statement
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This model will allow us to respond to increasing customer demand during peak periods and reduce costs.

For this question, refer to the JencoMart case study. The migration of JencoMart's application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)
A. A single VPN tunnel, which limits throughput
B. A tier of Google Cloud Storage that is not suited for this task
C. A copy command that is not suited to operate over long distances
D. Fewer virtual machines (VMs) in GCP than on-premises machines
E. A separate storage layer outside the VMs, which is not suited for this task
F. Complicated internet connectivity between the on-premises infrastructure and GCP

Q123. You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
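A minimal sketch of the batch-get pattern from option A, using the google-cloud-datastore Python client; the project, the 'Task' kind, and the identifier list are hypothetical:

```python
from google.cloud import datastore

client = datastore.Client(project="my-project")  # hypothetical project

# Build a Key per known identifier, then fetch them all in one batch
# call instead of one round trip per entity.
ids = [5629499534213120, 5066549580791808]  # hypothetical numeric IDs
keys = [client.key("Task", task_id) for task_id in ids]
entities = client.get_multi(keys)

for entity in entities:
    print(entity.key.id, dict(entity))
```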
Q124. A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed. What is the most likely cause of this problem?
A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching

Q125. For this question, refer to the TerramEarth case study. Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?
A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
Explanation:
References: https://cloud.google.com/customers/ocado/

Q126. Your marketing department wants to send out a promotional email campaign. The development team wants to minimize direct operation management. They project a wide range of possible customer responses, from 100 to 500,000 click-throughs per day. The link leads to a simple website that explains the promotion and collects user information and preferences. Which infrastructure should you recommend? (Choose two.)
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.
References: https://cloud.google.com/storage-options/
Q127. Case Study: 7 - Mountkirk Games

Company Overview
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers. Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements
- Increase to a global footprint.
- Improve uptime – downtime is loss of players.
- Increase efficiency of the cloud resources we use.
- Reduce latency to all customers.

Technical Requirements
Requirements for Game Backend Platform
- Dynamically scale up or down based on game activity.
- Connect to a transactional database service to manage user profiles and game state.
- Store game activity in a timeseries database service for future analysis.
- As the system scales, ensure that data is not lost due to processing backlogs.
- Run hardened Linux distro.
Requirements for Game Analytics Platform
- Dynamically scale up or down based on game activity.
- Process incoming data on the fly directly from the game servers.
- Process data that arrives late because of slow mobile networks.
- Allow queries to access at least 10 TB of historical data.
- Process files that are regularly uploaded by users' mobile devices.

Executive Statement
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?
A. Create network load balancers. Use preemptible Compute Engine instances.
B. Create network load balancers. Use non-preemptible Compute Engine instances.
C. Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Q128. Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables. You want analysts from each country to be able to see and query only the data for their respective countries. How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
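A sketch of the dataset-level sharing described in option A, using the google-cloud-bigquery Python client; the project, dataset, and group address are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")            # hypothetical project
dataset = client.get_dataset("my-project.analytics_de")   # hypothetical dataset

# Append a READER access entry for the country group, then patch only
# the access_entries field on the dataset.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="de-analysts@example.com",  # hypothetical group
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```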
Q129. Your architecture calls for the centralized collection of all admin activity and VM system logs within your project. How should you collect these logs from both VMs and services?
A. All admin and VM system logs are automatically collected by Stackdriver.
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
Explanation/Reference: https://cloud.google.com/logging/docs/agent/

Q130. You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version. What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications.
D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.

Q131. The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss. Which process should you implement?
A. Append metadata to file body. Compress individual files. Name files with serverName-Timestamp. Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and manifest file into a single archive file. Name files using serverName-EventSequence. Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. Compress individual files. Name files with serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
D. Append metadata to file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.
Explanation:
https://cloud.google.com/storage/docs/request-rate
Use a naming convention that distributes load evenly across key ranges. Auto-scaling of an index range can be slowed when using sequential names, such as object keys based on a sequence of numbers or timestamps. This occurs because requests are constantly shifting to a new index range, making redistributing the load harder and less effective. In order to maintain a high request rate, avoid using sequential names. Using completely random object names will give you the best load distribution.
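A minimal sketch of the random-prefix naming from option D, using the google-cloud-storage Python client; the project and bucket names are hypothetical. The random hex prefix keeps writes spread across Cloud Storage's index key ranges instead of hotspotting a sequential range:

```python
import secrets
from datetime import datetime, timezone

from google.cloud import storage

client = storage.Client(project="my-project")  # hypothetical project
bucket = client.bucket("server-events")        # hypothetical bucket

def upload_event(server_name: str, payload: bytes) -> str:
    # Random prefix avoids the sequential-key hotspotting described above;
    # the readable suffix keeps objects identifiable for later analysis.
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    name = f"{secrets.token_hex(4)}/{server_name}-{ts}.gz"
    bucket.blob(name).upload_from_string(payload, content_type="application/gzip")
    return name
```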
Q132. For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
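A sketch of the Pub/Sub-triggered pattern from option C, written with the Functions Framework for Python; the function name, topic, and message payload shape are hypothetical:

```python
import base64
import json

import functions_framework

# Deployed with a Pub/Sub trigger, for example:
#   gcloud functions deploy airwolf_trigger --trigger-topic=releases ...
@functions_framework.cloud_event
def airwolf_trigger(cloud_event):
    # Pub/Sub delivers the message base64-encoded inside the CloudEvent.
    msg = base64.b64decode(cloud_event.data["message"]["data"])
    release = json.loads(msg)  # hypothetical payload: {"version": "..."}
    print(f"Starting Airwolf pen test for release {release['version']}")
```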
Q133. For this question, refer to the TerramEarth case study. TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle will fail, so that the development team can focus their efforts there. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?
The four candidate architectures were presented as diagrams A through D (not reproduced here).
- Option A
- Option B
- Option C
- Option D
Explanation:
The push endpoint can be a load balancer. A container cluster can be used. Cloud Pub/Sub for Stream Analytics.
References:
https://cloud.google.com/pubsub/
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data

Q134. Your company's test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud, to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible. Which cloud infrastructure should you recommend?
A. Google Compute Engine unmanaged instance groups and Network Load Balancer
B. Google Compute Engine managed instance groups with auto-scaling
C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
D. Google App Engine with Google Stackdriver for logging

Q135. For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?
A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
Explanation:
https://cloud.google.com/bigquery/docs/managing-partitioned-tables#partition-expiration
https://cloud.google.com/storage/docs/lifecycle
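A sketch of option C's BigQuery side: creating a time-partitioned table whose partitions expire after roughly 36 months, via the google-cloud-bigquery Python client. The project, dataset, table, schema, and the day-based approximation of 36 months are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

table = bigquery.Table(
    "my-project.telemetry_eu.vehicle_events",  # hypothetical table
    schema=[
        bigquery.SchemaField("vehicle_id", "STRING"),
        bigquery.SchemaField("event_time", "TIMESTAMP"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
# Day-partition on event_time; expire partitions after ~36 months
# (1095 days expressed in milliseconds).
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_time",
    expiration_ms=1095 * 24 * 60 * 60 * 1000,
)
client.create_table(table)
```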
Topic 7, Mountkirk Games Case 2
The company overview, solution concept, business requirements, technical requirements, and executive statement for this topic are identical to the Mountkirk Games case study quoted in Q127 above.

Q136. Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
Explanation:
https://cloud.google.com/transfer-appliance/docs/2.0/faq

Q137. You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
A. Use kubectl set image deployment/echo-deployment <new-image>
B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster
C. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file>
D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps
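Option A's kubectl set image performs a rolling update; the same patch can be issued programmatically. A minimal sketch with the official Kubernetes Python client, assuming a container named 'echo' in the 'default' namespace and a hypothetical image tag:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Patching the pod template's image triggers the Deployment's rolling
# update strategy, replacing pods gradually with minimal downtime.
apps.patch_namespaced_deployment(
    name="echo-deployment",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "echo", "image": "gcr.io/my-project/echo:v2"}  # hypothetical
    ]}}}},
)
```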
Q138. You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically "scales to zero" so you don't incur costs when there is no activity. Which primary compute resource should you choose?
A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment

Q139. Case Study: 4 - Dress4Win Case Study

Company Overview
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model.

Company Background
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment
The Dress4Win application is served out of a single data center location.
Databases:
- MySQL - user data, inventory, static data
- Redis - metadata, social graph, caching
Application servers:
- Tomcat - Java micro-services
- Nginx - static content
- Apache Beam - batch processing
Storage appliances:
- iSCSI for VM hosts
- Fiber channel SAN - MySQL databases
- NAS - image storage, logs, backups
Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
MQ servers:
- Messaging
- Social notifications
- Events
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners

Business Requirements
- Build a reliable and reproducible environment with scaled parity of production.
- Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
- Improve business agility and speed of innovation through rapid provisioning of new resources.
- Analyze and optimize architecture for performance in the cloud.
- Migrate fully to the cloud if all other requirements are met.

Technical Requirements
- Evaluate and choose an automation framework for provisioning resources in cloud.
- Support failover of the production environment to cloud during an emergency.
- Identify production services that can migrate to cloud to save capacity.
- Use managed services whenever possible.
- Encrypt data on the wire and at rest.
- Support multiple VPN connections between the production data center and cloud environment.

CEO Statement
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could use a public cloud platform to offset their up-front investment, freeing them to focus on developing better features.

CTO Statement
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.

CFO Statement
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current model.

For this question, refer to the Dress4Win case study. You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?
A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
D. BigQuery to store the data, and a web server cluster in a managed instance group to access the data.
E. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
References: https://cloud.google.com/storage/docs/storage-classes

Q140. You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days.
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron.
Explanation:
https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
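Option B's lifecycle rule can also be applied with the google-cloud-storage Python client rather than gsutil; a minimal sketch, assuming a hypothetical project and a bucket named 'backups':

```python
from google.cloud import storage

client = storage.Client(project="my-project")  # hypothetical project
bucket = client.get_bucket("backups")          # hypothetical bucket

# Equivalent to pushing {"action": {"type": "Delete"},
# "condition": {"age": 90}} with `gsutil lifecycle set`.
bucket.add_lifecycle_delete_rule(age=90)
bucket.patch()
```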
Q141. You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?
A. Tag messages client side with the originating user identifier and the destination user.
B. Encrypt the message client side using block-based encryption with a shared key.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
The answer is C: the client signs the message with the originating user's private key, so anyone holding the matching public certificate can verify who sent it.
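In practice, "encrypting with the private key" is message signing. A minimal sketch with the Python cryptography library; the key pair here is generated on the fly purely for illustration, whereas a real app would distribute per-user keys via certificates:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative key pair; in PKI the private key stays on the sender's
# device and the public key is distributed via a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"hello from alice"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# The recipient verifies with the sender's public key; a forged or
# altered message raises InvalidSignature.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("spoofed message")
```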
Q142. For this question, refer to the Mountkirk Games case study. Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
A. Create a scalable environment in GCP for simulating production load.
B. Use the existing infrastructure to test the GCP-based backend at scale.
C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
D. Create a set of static environments in GCP to test different levels of load – for example, high, medium, and low.

Professional-Cloud-Architect Dumps and Practice Test (251 Exam Questions): https://www.examslabs.com/Google/Google-Cloud-Certified/best-Professional-Cloud-Architect-exam-dumps.html